2018-11-29

KeePassDroid Fingerprint Sensor Issue

If KeePassDroid shows a message that there is a problem with the fingerprint sensor:

  1. Go to the Android App Settings => Apps => KeePassDroid => Data
     and empty the cache.
  2. If this doesn't help:
     Make a backup of your KeePass data file,
     then do the same as in 1., but delete all the data (instead of only emptying the cache).
I solved this problem on my Samsung Galaxy S9 about a month ago and only found the time to write it down now. That's why I do not remember exactly which of the two methods I used.

If this helped you, please give me feedback, so that I can update this blog entry.

(I know, this is a trivial hint and it is almost embarrassing to admit, but it took me some time to get this idea.)

2018-11-20

ORA-65011 Pluggable database ... does not exist


PROBLEM: ORA-65011 Pluggable database ... does not exist

If trying to start the DB service ...
OS> srvctl start service -s MY_DB_SERVICE -d MY_CDB_UNIQUE_NAME -pdb MY_PDB_SID

... you get the following error ORA-65011 ...
PRCD-1084 : Failed to start service MY_DB_SERVICE
PRCR-1079 : Failed to start resource ora.MY_CDB_UNIQUE_NAME.MY_DB_SERVICE.svc
ORA-65011: Pluggable database MY_PDB_SD does not exist.
CRS-5017: The resource action "ora.MY_CDB_UNIQUE_NAME.MY_DB_SERVICE.svc start" encountered the following error:
ORA-65011: Pluggable database MY_PDB_SD does not exist.
. For details refer to "(:CLSN00107:)" in "/u00/app/oracle/diag/crs/my_host/crs/trace/ohasd_oraagent_oracle.trc".
CRS-2674: Start of 'ora.MY_CDB_UNIQUE_NAME.MY_DB_SERVICE.svc' on 'my_host' failed

(Since I had just created the DB using RMAN duplicate and skipped some tablespaces, I first thought that my PDB (pluggable DB) had problems because of the missing tablespaces.)

... but, as the error says, there is no PDB with the given name:
OS> oerr ORA 65011
65011, 00000, "Pluggable database %s does not exist."
// *Cause:  User attempted to specify a pluggable database
//          that does not exist.
// *Action: Check DBA_PDBS to see if it exists.
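
As the action text suggests, the existing PDBs can be checked directly in the CDB (a quick sanity check; run it as a privileged user in the CDB root):
SQL> SELECT pdb_name, status FROM dba_pdbs;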


SOLUTION
Check how the service has been configured.
I noticed that the name of my PDB (MY_PDB_SID)
was misspelled (MY_PDB_SD, the "I" is missing):
OS> srvctl config service -s MY_DB_SERVICE -db MY_CDB_UNIQUE_NAME
Service name: MY_DB_SERVICE
Cardinality: SINGLETON
Service role: PRIMARY
. . .
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Pluggable database name: MY_PDB_SD
Maximum lag time: ANY
SQL Translation Profile:
. . .
GSM Flags: 0
Service is enabled

... then re-create the DB service using the correct PDB name
(so I re-created my MY_DB_SERVICE):
OS> srvctl remove service -s MY_DB_SERVICE -db MY_CDB_UNIQUE_NAME
OS> srvctl add service -service MY_DB_SERVICE -db MY_CDB_UNIQUE_NAME -pdb MY_PDB_SID -l PRIMARY

Then it worked
OS> srvctl start service -s MY_DB_SERVICE -db MY_CDB_UNIQUE_NAME
OS> srvctl status service -s MY_DB_SERVICE -db MY_CDB_UNIQUE_NAME
Service MY_DB_SERVICE is running


2018-11-16

Oracle Database: How To "Shrink" the UNDO Tablespace



-- Create "temporary" undo tablespace and make it the default undo tablespace
create undo tablespace UNDO_TMP datafile '+U01' size 2G AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;
alter system set undo_tablespace=UNDO_TMP;
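
-- Optional sanity check before dropping the old undo tablespace (UNDOTS):
-- verify that no active transactions still use its undo segments
-- (join of v$transaction and dba_rollback_segs; not strictly required).
SELECT r.tablespace_name, COUNT(*) active_transactions
  FROM v$transaction t
  JOIN dba_rollback_segs r ON r.segment_id = t.xidusn
 GROUP BY r.tablespace_name;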

-- Drop the original undo tablespace
drop tablespace UNDOTS including contents and datafiles;

-- Recreate the old undo tablespace (smaller) and make it the default undo tablespace again
create undo tablespace UNDOTS datafile '+U01' size 2G AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED;
alter system set undo_tablespace=UNDOTS;

-- Drop the "temporary" undo tablespace
drop tablespace UNDO_TMP including contents and datafiles;
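
-- Quick check of the result (optional): sizes of the undo datafiles
-- after the operation, queried from dba_data_files.
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024, 1) size_gb
  FROM dba_data_files
 WHERE tablespace_name LIKE 'UNDO%'
 GROUP BY tablespace_name;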

2018-11-13

Oracle Enterprise Manager / Cloud Control: Historical Data Not Showing Issue

Problem Description

Last week, after one of our Oracle Database hosts had to be rebooted because of a memory problem, I had the issue with Oracle Enterprise Manager / Cloud Control that the historical data of this DB (and only of this DB) was not showing any more:

  • In "Performance" => "Performance Home" => "View Data: Historical"
    Error Message
    Data Not Available. Statspack data is not available for this database instance. Make sure that Statspack is installed on the target instance.
  • In "Performance" => "Top Activity"  => "View Data: Historical"
    Error Message
    (No Data Available)
  • In "Performance" => "ASH Analytics" and in "Top Activity"
    Error
    No useful data was shown

Possible Solution

Completely log out of Enterprise Manager (not only from the DB) and log back in.

2018-08-15

ORA-16548: database not enabled


PROBLEM

The standby DB is/stays disabled (cannot be enabled)

DGMGRL> show configuration;

Configuration - MYDB

  Protection Mode: MaxAvailability
  Databases:
    MYDB_PRIMARY - Primary database
    MYDB_STANDBY - Physical standby database (disabled)

Fast-Start Failover: DISABLED
OR
DGMGRL> edit database MYDB_STANDBY . . .
Error: ORA-16548: database not enabled

and the following . . .
DGMGRL> enable database MYDB_STANDBY;
Enabled.
. . . does not work: despite showing "Enabled.", the DB is not really enabled.

The Data Guard log file drc<SID>.log
(cd /u00/app/oracle/diag/rdbms/mydb_primary/mydb/trace, or with Trivadis BaseEnv: cdd; cd trace)
shows:
. . .
ENABLE DATABASE MYDB_STANDBY
Warning, database MYDB_STANDBY that was marked for re-creation
      will be re-enabled. There may be errors or warnings
      if the database was not properly re-created. See this
      log and the alert log for more details.
Metadata Resync failed. Status = ORA-16603
. . .


POSSIBLE CAUSE

I caused the problem myself: I had tried to solve a Data Guard problem by re-creating the configuration with a configuration name in upper case letters instead of re-using the same name as before (in lower case letters).

POSSIBLE SOLUTION

On the STANDBY DB
DGMGRL> connect sys/<Pwd>@MYDB_PRIMARY
Connected.
DGMGRL> remove configuration;
Error: ORA-16627: operation disallowed since no standby databases would remain to support protection mode

Failed.
DGMGRL> disable configuration;
Disabled.
DGMGRL> remove configuration;
Removed configuration

On the PRIMARY DB
DGMGRL> connect sys/<Pwd>@MYDB_PRIMARY

DGMGRL> enable database MYDB_STANDBY;
Enabled.
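
To verify, check the configuration again (same command as in the problem description above); MYDB_STANDBY should no longer be shown as "(disabled)":
DGMGRL> show configuration;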


2018-08-13

Handling "org.gradle.api.GradleException: Lint found fatal errors while assembling a release target"


If the following error occurs when building the app . . .
org.gradle.api.GradleException: Lint found fatal errors while assembling a release target.

To proceed, either fix the issues identified by lint, or modify your build script as follows:
...
android {
    lintOptions {
        checkReleaseBuilds false
        // Or, if you prefer, you can continue to check for errors in release builds,
        // but continue the build even when errors are found:
        abortOnError false
    }
}
. . .

Instead of simply adding the android block as suggested by the message, perform a manual lint check as follows:
"Analyze" -> "Inspect Code..."

Most of the time, the problems I had were that some translation was missing.

2018-08-10

Solving the Problem: Former Primary DB cannot Flashback/Reinstate to Become Standby (ORA-38754)


The following error is listed in the DB alert log file:
2018-08-10T14:26:35.511435+02:00
FLASHBACK DATABASE TO SCN 3991690
ORA-38754 signalled during: FLASHBACK DATABASE TO SCN 3991690...


Look at the DataGuard log file
${ORACLE_BASE}/diag/rdbms/<DB_UNIQUE_NAME>/${ORACLE_SID}/trace/drc${ORACLE_SID}.log
(If you are using Trivadis' BaseEnv: cdd; cd trace; view drc${ORACLE_SID}.log)
. . .
Flashback SCN is 3991690; DB checkpoint SCN is 3991507. Flashback to SCN 3991690.
SQL Execution error=604, sql=[FLASHBACK DATABASE TO SCN 3991690]. See error stack below.
  ORA-00604: error occurred at recursive SQL level 1
  ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
  ORA-38762: redo logs needed for SCN 3988974 to SCN 3991690
  ORA-38761: redo log sequence 817 in thread 1, incarnation 1 could not be accessed
viability check: standby database is not viable
Unable to flashback old primary database to SCN 3991690, error = ORA-38761
Reinstatement is not possible if there are insufficient flashback logs
This database may need to be re-created from a copy
 of the new primary database if the flashback issue cannot be resolved.
. . .


"Logon" with RMAN . . .
rman target / catalog /@<TNS-Alias to RMAN-Catalog>

. . . and restore the archive logs
RUN {
  ALLOCATE CHANNEL ch1 TYPE 'sbt_tape' PARMS="SBT_LIBRARY=/opt/hds/Base/libobk.so";
  # ALLOCATE CHANNEL ch1 TYPE disk;
  RESTORE ARCHIVELOG SCN BETWEEN 3988974 AND 3991690;
}

The DB should automatically detect the restored archive logs and be able to flashback/reinstate becoming the new standby DB.
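
If the broker does not retry the reinstatement on its own, it can also be triggered manually from DGMGRL on the new primary (replace the placeholder with the former primary's DB_UNIQUE_NAME as known to the broker):
DGMGRL> reinstate database <former_primary_db_unique_name>;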

2018-07-31

How To Check if ASM Diskgroup is Not Overbooked


The query below can be used to detect overbooked ASM diskgroups.
The value of the column real_avail_gb should be >= 0.

-----------------------------------------------------------------------
-- total_gb      : Physical size of the disk group in GB
-- usable_gb     : Corresponds to usable_file_mb in GB
--                 i.e. Amount of free space that can be safely
--                 utilized 
--                 taking mirroring into account and yet be able to
--                 restore redundancy after a disk failure
-- df_maxsizes_gb: Sum of the datafiles max sizes in GB
-- real_avail_gb : (total_gb - df_maxsizes_gb)
--                 Shows if there would be enough space on the
--                 diskgroup even if all files would grow up to their
--                 maxsize
-----------------------------------------------------------------------
SELECT name, total_gb, usable_gb, df_maxsizes_gb,
       (total_gb - df_maxsizes_gb) real_avail_gb
  FROM
  (
    SELECT name, total_mb/1024 total_GB,
           -- free_mb/1024 free_gb,
           usable_file_mb/1024 usable_gb,
           DF.maxsize_gb df_maxsizes_gb
      FROM v$asm_diskgroup,
           ( 
             SELECT SUBSTR(file_name, 2, 
                          INSTR(file_name, '/', 1) - 2) disk_group,
                   SUM(DECODE(maxbytes,0,bytes/(1024*1024*1024),
                              maxbytes/(1024*1024*1024))
                      ) maxsize_gb
               FROM sys.dba_data_files
              GROUP BY SUBSTR(file_name, 2, INSTR(file_name, '/', 1) - 2)
           ) DF
     WHERE DF.disk_group = name
  )
 ORDER BY 1;

Example output:
NAME   TOTAL_GB   USABLE_GB    DF_MAXSIZES_GB   REAL_AVAIL_GB
------ ---------- ------------ ---------------- --------------
U01    1600       236.484375   1504.064453125   95.935546875



The query above was built based on the following queries:
-----------------------------------------------------------------------
-- Querying the ASM diskgroups directly from the DB
-- (no need to logon to the ASM DB instance)
-----------------------------------------------------------------------
SELECT name,
       total_mb/1024 total_GB,
       free_mb/1024 free_gb,
       usable_file_mb/1024 usable_gb
  FROM v$asm_diskgroup;


-----------------------------------------------------------------------
-- Tablespace MaxSizes by ASM diskgroup
-- Precondition: Diskgroup starts with +.
-- For example: file_name="+U01/folder/file.ext" => disk_group="U01" 
-----------------------------------------------------------------------
SELECT SUBSTR(file_name, 2, INSTR(file_name, '/', 1) - 2) disk_group,
       tablespace_name,
       SUM(DECODE(maxbytes,0,bytes/(1024*1024*1024),
           maxbytes/(1024*1024*1024))
       ) maxsize_gb
  FROM sys.dba_data_files
 GROUP BY SUBSTR(file_name, 2, INSTR(file_name, '/', 1) - 2), tablespace_name
 ORDER BY 1;


2018-07-04

Oracle SQL Developer Showing Duplicate/Multiple DB Alias

If Oracle SQL Developer is listing multiple network aliases (TNS aliases) for a DB, check your "default tnsnames location" for multiple tnsnames* files.

Oracle SQL Developer merges the entries of all files matching tnsnames*.

In my case I had two files, tnsnames.ora and tnsnames.ora_OLD, in the same directory.
(Some of the aliases did not work, because they came from tnsnames.ora_OLD.)


I hope it helps.

2018-06-06

How To Restore an Oracle DB on Another Host


Preconditions on the host where the DB will be restored to (called test host / TEST_DB):
  • this description is valid for Linux
  • a test DB of your own is already installed, so that the binaries and directory structure are ready
  • enough storage for the DB to be restored (comparable to the original DB)

The original DB to be restored on a different host (test host) will be called:
SID           : ORIG_DB
DB_UNIQUE_NAME: ORIG_DB_SITE1

All steps are performed on the test host.
If your test DB is running, stop it.
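For example (one way to do it, with the TEST_DB environment set):
sqlplus / as sysdba
SQL> shutdown immediate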

Go to the admin directory
cd /u00/app/oracle/admin

Optionally, delete log and audit files that are not needed, in order to save space if your /u00 is not big enough:
cd TEST_DB
find . \( -name "*.log" -o -name "*.aud" \) -exec rm {} \;

Create the Oracle admin/<SID> directory as a copy of your TEST_DB directory, using the name of the DB to be restored:
cd ..
cp -R TEST_DB ORIG_DB

Insert the following line in file /etc/oratab 
ORIG_DB:/u00/app/oracle/product/12.2.0.1:N

Log on again as the OS oracle user.
For example:
exit
sudo -u oracle sudosh

Set your Linux environment.
For example (using BaseEnv from Trivadis):
ORIG_DB

There are two variants for preparing the DB init file:
A) copy the init file from the ORIG_DB to your test host
B) edit the init file of your TEST_DB (if you do not have access to the original DB init file any more)

For both variants, here is the target directory where the DB init files are located:
cd /u00/app/oracle/admin/${ORACLE_SID}/pfile
A) the copy variant should be clear

B) edit variant
Rename the init file of your TEST_DB in order to match your ORIG_DB
mv initTEST_DB.ora initORIG_DB.ora
vi initORIG_DB.ora

In initORIG_DB.ora, replace all TEST_DB substrings with ORIG_DB.
The following parameters are affected (a sketch follows after this list):
    • control_files
    • db_name
    • db_unique_name
    • audit_file_dest
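
For illustration, the affected lines in initORIG_DB.ora could end up looking roughly like this (the paths are placeholders, adapt them to your environment):
*.db_name='ORIG_DB'
*.db_unique_name='ORIG_DB_SITE1'
*.control_files='<controlfile path(s) with ORIG_DB instead of TEST_DB>'
*.audit_file_dest='/u00/app/oracle/admin/ORIG_DB/adump'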

Optionally, delete other unused init*.ora, spfile*.ora and pfile*.ora files from this directory in order to avoid confusion.
ATTENTION: do not delete your prepared initORIG_DB.ora file.

Create a symbolic link to your new initORIG_DB.ora file
cd $ORACLE_HOME/dbs
ln -s /u00/app/oracle/admin/${ORACLE_SID}/pfile/init${ORACLE_SID}.ora init${ORACLE_SID}.ora
  
Result, for example:
oracle@test_host [ORIG_DB] ll
. . .
lrwxrwxrwx  1 oracle dba        49 Jun  5 11:22 initORIG_DB.ora -> /u00/app/oracle/admin/ORIG_DB/pfile/initORIG_DB.ora
. . .

Now you are ready to perform your restore.

Here is an example of an RMAN command file.
You will need to adapt:
  • how you connect to your RMAN catalog DB
  • the number of RMAN channels
  • the RMAN channel type and parameters (PARMS) (look at the RMAN log files of the ORIG_DB)
  • the DBID
  • the SCN (or use a time instead of an SCN)

You should be able to find the DBID in the logs of the RMAN backups of the ORIG_DB.
For example:
. . .
RMAN>
connected to target database: ORIG_DB (DBID=767753736)
. . .

You can also query the real ORIG_DB (not the new temporary DB on the test host) for the DBID (if available).
For example:
SQL> SELECT dbid FROM v$database;

      DBID
----------
 767753736

Start up the DB in NOMOUNT mode
SQL> startup nomount

Copy the controlfile of the ORIG_DB to the test host ***  . . .
(assuming you have this controlfile in an ASM file system or in a backup of the file system;
one way to get such a copy out of ASM is sketched after the commands below)
. . . and "restore" this controlfile:
rman TARGET / NOCATALOG
SET DBID=767753736
RESTORE CONTROLFILE FROM "<Path to *** from above>";

ALTER DATABASE MOUNT;
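
One way to obtain the controlfile copy marked *** above is asmcmd on the original host (a sketch; the +U01 path is only a placeholder for wherever your controlfile actually lives):
OS> asmcmd cp +U01/ORIG_DB_SITE1/CONTROLFILE/<Current.xxx.yyy> /tmp/ORIG_DB_control.ctl
Then transfer /tmp/ORIG_DB_control.ctl to the test host and use its path in the RESTORE CONTROLFILE FROM command above.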

Or, if you need to use the RMAN catalog (which will cause RMAN problems with the original DB), use an RMAN command file that restores the DB controlfile and mounts the DB:
connect target /
connect catalog /@<Your RMANCATALOG>

SET DBID 767753736;

run {
  ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE' PARMS="SBT_LIBRARY=/opt/hds/Base/libobk.so";
  SET UNTIL SCN=1815512098911;
  RESTORE CONTROLFILE;
  RELEASE CHANNEL CH1;
}
ALTER DATABASE MOUNT;

ATTENTION: After having used the RMAN catalog from another DB with the same DBID (here the DB restored on the test host), you will need to go to the ORIG_DB and perform an RMAN crosscheck:

OS> rman target / catalog /@<Your RMAN catalog alias>

RMAN> crosscheck archivelog all;


Now the DB can be restored. (Again, adapt the values for your environment: channels, DBID, SCN.)
connect target /
# no catalog connection (nocatalog); the restore uses the controlfile restored above

#SET DBID 767753736;

RUN
{
  ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE' PARMS='SBT_LIBRARY=/opt/hds/Base/libobk.so';
  ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE' PARMS='SBT_LIBRARY=/opt/hds/Base/libobk.so';
  ALLOCATE CHANNEL CH3 TYPE 'SBT_TAPE' PARMS='SBT_LIBRARY=/opt/hds/Base/libobk.so';
  ALLOCATE CHANNEL CH4 TYPE 'SBT_TAPE' PARMS='SBT_LIBRARY=/opt/hds/Base/libobk.so';
  SET UNTIL SCN=1815512098911;
  RESTORE DATABASE;
  RECOVER DATABASE;
  #ALTER DATABASE OPEN RESETLOGS;
}
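
The OPEN RESETLOGS is commented out in the command file above; once RESTORE and RECOVER have completed without errors, it can be run manually:
RMAN> ALTER DATABASE OPEN RESETLOGS;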






2018-05-05

Firestore: Reading Document Issue - Document Can Not Be Found


For my Android app I am using Google's Firebase Firestore NoSQL database in order to store some data in a cloud DB.

I had no problem saving the data, but retrieving it was not working.
So I reduced my test data structure to a minimum and tried all the methods described in the documentation, without success.

My - not working - minimal test data structure was:

Tsts (collection) => 11 (document) => Devs (collection) => one document per device

This structure represents a collection of tests ("Tsts") of my app, containing one document for each version of my app, in this case version 11.
The document 11 contains a collection ("Devs") with one document for each device on which my app has been tested.

The solution of the problem was adding a field to the document 11; in my test I simply added a dummy field ("field1") to it.
For testing how to read the data, I used the following code, based on the Firestore documentation (sub-chapter "Get a document").
DocumentReference docRef1 = mDB.collection("Tsts").document("11");
docRef1.get().addOnCompleteListener(new OnCompleteListener<DocumentSnapshot>() {
    @Override
    public void onComplete(@NonNull Task<DocumentSnapshot> task) {
        if (task.isSuccessful()) {
            DocumentSnapshot document = task.getResult();
            if (document.exists()) {
                mLog.debug("DocumentSnapshot data: {}", document.getData());
            } else {
                mLog.debug("No document");
            }
        } else {
            mLog.debug("get failed with ", task.getException());
        }
    }
});

Only after adding the test field "field1" could I read something.
Before adding field1, I always got the message "No document" in the logs.

I hope this info helps someone.

2018-05-02

Flashback Database After Failover: Correcting "Error ORA-38754: FLASHBACK DATABASE not started"


If the alert log file of the former primary DB contains the following error messages:
. . .
FLASHBACK DATABASE TO SCN 6693334
ORA-38754 signalled during: FLASHBACK DATABASE TO SCN 6693334...
. . .

If you try to perform the flashback manually, you might get the following error:
SQL> FLASHBACK DATABASE TO SCN 6693334;
FLASHBACK DATABASE TO SCN 6693334
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 6691520 to SCN 6693334
ORA-38761: redo log sequence 124 in thread 1, incarnation 13 could not be accessed

In this case . . .

. . . on the new standby DB (the former primary DB, before the failover),
use RMAN by connecting to the local DB and using the controlfile (NOCATALOG):
rman target=/ NOCATALOG

or (if BaseEnv from Trivadis is available - with command history):

rmanh target=/ NOCATALOG

Since this command using the SCN did not return any archivelogs . . .
RMAN> LIST ARCHIVELOG SCN BETWEEN 6691520 AND 6693334;

. . . I searched for the backup of the archivelog with sequence 124 by guessing the time range . . .
LIST BACKUP OF ARCHIVELOG
      FROM TIME "TO_DATE('2018.05.01 20:00:00','YYYY.MM.DD HH24:MI:SS')"
      UNTIL TIME "TO_DATE('2018.05.02 12:00:00','YYYY.MM.DD HH24:MI:SS')";

. . . and was lucky: I got the SCN of the desired archivelog (SCN 6691790), but for restoring I chose the Low SCN of the previous archivelog, i.e. 6689656:
BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ -------------------
343     60.00M     SBT_TAPE    00:00:52     2018-05-02 10:57:31
        BP Key: 343   Status: AVAILABLE  Compressed: NO  Tag: TAG20180502T105639
        Handle: 6446830_C0200Z01_c9t1sib7_1_1   Media: V_3683633_19130778

  List of Archived Logs in backup set 343
  Thrd Seq     Low SCN    Low Time            Next SCN   Next Time
  ---- ------- ---------- ------------------- ---------- ---------
. . .
  1    122     6687465    2018-05-02 10:27:49 6689656    2018-05-02 10:42:47
  1    123     6689656    2018-05-02 10:42:47 6691790    2018-05-02 10:56:38
  1    124     6691790    2018-05-02 10:56:38 6691798    2018-05-02 10:56:38


HINT
It can happen that the needed archivelog belongs to another incarnation of the DB.
(If this is the case - on Oracle 12c - you can use the incarnation number with the RMAN restore statement.)
This statement lists your incarnations:
RMAN> list incarnation;

List of Database Incarnations
DB Key  Inc Key DB Name  DB ID            STATUS  Reset SCN  Reset Time
------- ------- -------- ---------------- --- ---------- ----------
1       1       DB200Z01 2998214488       PARENT  1          2018-04-20 08:00:24
2       2       DB200Z01 2998214488       PARENT  2449245    2018-04-23 07:56:18
. . .
11      11      DB200Z01 2998214488       PARENT  5928815    2018-05-01 09:13:12
12      12      DB200Z01 2998214488       PARENT  6070931    2018-05-01 13:49:26
13      13      DB200Z01 2998214488       CURRENT 6179737    2018-05-01 14:47:30
==> Incarnation 13 is current (I do not need to use it with the restore statement)

Performing the restore of the archivelog (adapt the PARMS= for your needs):
RUN {
  ALLOCATE CHANNEL ch1 TYPE 'sbt_tape' PARMS="SBT_LIBRARY=/opt/hds/Base/libobk.so";
  # ALLOCATE CHANNEL ch1 TYPE disk;
  #
  RESTORE ARCHIVELOG FROM SCN 6689656;  # INCARNATION 13;
}

Output
allocated channel: ch1
channel ch1: SID=778 device type=SBT_TAPE
channel ch1: CommVault Systems for Oracle: Version 11.0.0(BUILD80)

Starting restore at 2018-05-02 13:15:52

archived log for thread 1 with sequence 125 is already on disk as file +U02/DB200Z01_HSB1/ARCHIVELOG/2018_05_02/thread_1_seq_125.759.975063515
. . .
archived log for thread 1 with sequence 133 is already on disk as file +U02/DB200Z01_HSB1/ARCHIVELOG/2018_05_02/thread_1_seq_133.741.975064009
channel ch1: starting archived log restore to default destination
channel ch1: restoring archived log
archived log thread=1 sequence=123
channel ch1: restoring archived log
archived log thread=1 sequence=124
channel ch1: reading from backup piece 6446830_DB200Z01_c9t1sib7_1_1

Now the standby DB should be able to perform the flashback automatically.
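
If the flashback is not performed automatically, the statement from the beginning of this entry can also be retried manually:
SQL> FLASHBACK DATABASE TO SCN 6693334;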

I hope it helps.
