Validating a timestamp

The Forensic Browser for SQLite lets you right-click a numeric date column and have the Browser convert the number to one of the supported date formats, applying a timezone offset as required.

The process is simple: right-click the required column and choose “View column as…”.

Select the date format that you believe the column is saved as; in this case I recognise this as an IOS NSDate format:

And it’s that simple…

So how can we validate the conversion done by the Forensic Browser? The method I chose is to use the SQLite built-in functions within the Forensic Browser, as follows.

We can do this simply by having two copies of the column that we are converting within the same query and then using the SQLite DateTime function to convert and display the date. The SQL is as follows:

SELECT ZKIKMESSAGE.ZTIMESTAMP,
DateTime(ZKIKMESSAGE.ZTIMESTAMP + 978307200, 'unixepoch') AS SQLiteConvertedTime,
ZKIKMESSAGE.ZBODY
FROM ZKIKMESSAGE

* The first line selects the ZTIMESTAMP column (and we apply an NSDate conversion using the Browser's built-in features as above)
* The second line selects the ZTIMESTAMP column again, does some internal SQLite work (more shortly) and displays it in a column with the alias SQLiteConvertedTime
* The third line adds the ZBODY field just so we can reference each row to the screenshot above
* And finally the fourth line specifies the table we are using

The results of this query are as below:

As you can see, the timestamps are the same and the two date columns compare as expected. The beauty of this method is that you do not need to leave the Browser, yet you are directly calling a function (DateTime) from the SQLite3.dll and thus bypassing the Browser for an independent date validation.

So how does it work?

The Browser takes the numeric value from the ZTIMESTAMP column, adds 978307200 to it and then tells SQLite to convert the result to a human-readable string, treating the new number as a unixepoch date.

DateTime(ZKIKMESSAGE.ZTIMESTAMP + 978307200, 'unixepoch')

The NSDate format records the number of seconds since 1/1/2001 and the Unix date format records the number of seconds since 1/1/1970. 978307200 is the number of seconds between the two dates (often referred to as the Delta); this figure is added to adjust the NSDate value to a Unix value.

The relevant web page for the DateTime function is as follows:

https://www.sqlite.org/lang_datefunc.html 

Similarly, other supported date formats can be validated as follows:

Unix 10 digit dates

The number of seconds since 01/01/1970

Unix dates can be validated simply by using the date “as is”, i.e. there is no need to apply a delta.

The SQL query is:

DateTime(numeric_time, 'unixepoch') AS Converted,

Unix milliseconds (13 digit) dates

The number of milliseconds since 01/01/1970

as above but convert to seconds by dividing by 1,000

The SQL query is:

DateTime(numeric_time / 1000, 'unixepoch') AS Converted,

Unix nano second and 100 nano second dates

These are the same as above but use 1,000,000,000 and 10,000,000 respectively as the divisor.
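The SQL queries, following the same pattern (a sketch, assuming numeric_time holds the raw value), are:

DateTime(numeric_time / 1000000000, 'unixepoch') AS Converted -- nanosecond dates
DateTime(numeric_time / 10000000, 'unixepoch') AS Converted -- 100 nanosecond dates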

Chrome/webkit time

This is the number of microsecond intervals since 01/01/1601

The Delta (difference) between 01/01/1601 and 01/01/1970 is 11644473600 seconds, so first convert the Chrome date from microseconds to seconds by dividing by 1,000,000 then just take away the delta (we take it away because the chrome epoch date [01/01/1601] is older than the unix epoch)

The SQL query is:

DateTime((numeric_time / 1000000) - 11644473600, 'unixepoch') AS Converted

Windows 64 bit filetime

This is very similar to the Chrome date except the interval is the number of 100 nanoseconds since 01/01/1601. Therefore instead of dividing by 1,000,000, we need to divide by 10,000,000

The SQL query is:

DateTime((numeric_time / 10000000) - 11644473600, 'unixepoch') AS Converted

NSDate (IOS)

Records the number of seconds since 01/01/2001 

The SQL query is:

DateTime(numeric_time + 978307200, 'unixepoch') AS Converted

Mac

Records the number of seconds since 01/01/1904 

The SQL query is:

DateTime(numeric_time - 2082844800, 'unixepoch') AS Converted

OLE Automation

Records the number of days and fractions of a day since 30/12/1899

Using an “example date” of 42439.766146 the query we want is:

SELECT DateTime(42439.766146 * 86400 - 2209161600, 'unixepoch') AS Converted

i.e. convert the day count (including the fractional portion) into seconds by multiplying by the number of seconds in a day, then subtract the delta between 30/12/1899 and 01/01/1970 (2209161600 seconds).

Forensic Browser for SQLite – Creating a custom report

Creating a custom report on the Kik messenger database

In this article, I want to take you through the process of creating a custom, but simple, report on a Kik messenger database step by step. As we work through the process we will choose which columns we think will be useful in our report and modify our report by creating simple visual SQL joins on different tables.

The first step when looking at a new database is to have a quick look at all of the tables to get a feel for the data that’s in them and see how we might start building our report. I use the “summary tables” tab to quickly review each table in turn; there are three tables that seemed to immediately deserve some further attention:

ZKIKMESSAGE – Contains the message text and some timestamps but does not contain a user name; it does, however, have a ZUSER column which would seem to be a user number.

There is also a ZKIKUSER table with user details; this almost certainly links to the ZKIKMESSAGE table, and its Z_PK (Primary Key) is almost certainly the user number referred to above.

Finally, there is a ZKIKATTACHMENT table, again with a timestamp column, and a content column that could do with some further investigation.

The first query is a visual query linking the ZKIKMESSAGE table with the ZKIKUSER table so that we can attribute a user name to each message. I won’t go into the mechanics of all the different types of JOINs available in SQLite here (there is an article dealing with this on my web site); I will just say that as we want a join with every row in the messages table matching up with a row from the users table, we need a LEFT JOIN (the Forensic Browser default). I’ll also now start to select the different columns that I may want in my final report (I can always change this later). In the animated gif below you can see the process and watch the SQL commands being built for you:
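For reference, the SQL that the designer builds for this join looks roughly like the sketch below (the ZUSERNAME and ZDISPLAYNAME columns are illustrative – select whichever user columns you need):

SELECT ZKIKMESSAGE.ZTIMESTAMP,
ZKIKMESSAGE.ZBODY,
ZKIKUSER.ZUSERNAME,
ZKIKUSER.ZDISPLAYNAME
FROM ZKIKMESSAGE
LEFT JOIN ZKIKUSER ON ZKIKMESSAGE.ZUSER = ZKIKUSER.Z_PK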

Because I like to start to tidy my reports up early I can now choose to display the two dates in a more meaningful format. In this table in Kik the dates are Mac Absolute format so I choose that format for all values in a column by right clicking on the relevant column and selecting the “display as” menu option. I can then choose the display format and timezone (I’ll set this to Mountain Standard Time).

So what about attachments? The first thing to do is to JOIN the attachments table in the same way as we did the users table. From the summary tables above we can see that the best join candidate is to use the Z_PK (primary key) of the messages table and the ZMESSAGE field of the attachment table. We will again use a LEFT JOIN as we want all the rows from the messages table and just the matching values from the ZKIKATTACHMENT table (where there is no match any new columns will be blank). I have chosen to include the attachment timestamp and the content columns from this table. Again we can see the SQL commands being built and starting to look quite complex, although we have not typed a single line of SQL code.

This report gives us a column with a date for any attachment and a column called “content”. Further research shows that strings in the content field are file names and that these files are binary Plists that contain jpgs (the attachments). How these Plists are decoded is beyond the scope of this article (there is another on my web site that deals with these); however, there is a Forensic Browser extension callable from the extensions menu that decodes these Plists and imports the jpgs within each Plist into a new table within the Browser case file. It is then a simple matter to make a final JOIN on this new table to include these pictures in the report. I finally add “is not null” to the column holding the picture to display just those messages that relate to an attachment; a sketch of the resulting SQL follows.
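The final query is then along the lines of the sketch below. Note that AttachmentPics and its pic and content columns are hypothetical stand-ins for the table that the extension actually creates; the ZKIKATTACHMENT column names follow the pattern seen in the summary tables:

SELECT ZKIKMESSAGE.ZTIMESTAMP,
ZKIKUSER.ZUSERNAME,
ZKIKMESSAGE.ZBODY,
ZKIKATTACHMENT.ZTIMESTAMP AS AttachmentTime,
AttachmentPics.pic
FROM ZKIKMESSAGE
LEFT JOIN ZKIKUSER ON ZKIKMESSAGE.ZUSER = ZKIKUSER.Z_PK
LEFT JOIN ZKIKATTACHMENT ON ZKIKATTACHMENT.ZMESSAGE = ZKIKMESSAGE.Z_PK
LEFT JOIN AttachmentPics ON AttachmentPics.content = ZKIKATTACHMENT.ZCONTENT
WHERE AttachmentPics.pic IS NOT NULL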

The animated gif below shows this process: the extension is run, a JOIN is made on the new table, a report is executed and then modified to show only those records that refer to attachments.

Forensic Browser – deleted records, journals, pictures and filtering

In this article, I want to cover a few of the areas where the Forensic Browser for SQLite provides features that are missing in other browsers or where it complements other more generic forensic software by providing features that are specific to general databases rather than specific ones. The Browser does this by providing a Visual Query Building environment (drag and drop SQL query generation) allowing the creation of very powerful and customized reports often without typing a single character.

It is worth noting here that most forensic software that creates a nice ‘canned’ report on an application only displays those tables and columns that the developer deems important. For instance, the Skype contacts table at the last count contained 97 columns and the messages table has 36 columns. While these reports usually contain all of the relevant data there can often be additional very useful and relevant data held in columns that do not form part of the generic report.

Additionally, database developers are prone to changing the schema of a database without notifying anyone; this may break your forensic application or may introduce relevant data in a new column. Database schemas also often vary between platforms, with a different schema for, say, Kik on Android than on IOS and different schemas might mean the best report on one platform differs from another.

The areas I will cover, with examples and screenshots, are:

  • Handling live, deleted and partial records in databases and journals
  • Creating custom reports showing a subset of users within a specific timeframe
  • Displaying pictures (from blobs, or external to a database) within your report

Handling live, deleted and partial records in databases and journals

Many SQLite applications allow the user to delete records as part of their operation and databases by their nature are often dynamic with new records being added and pages of B-Trees being moved to maintain a valid B-Tree structure. Pages (possibly containing live and deleted records) are often copied to rollback journals or in the case of the newer Write Ahead Logging journal, the new pages are written to the journal and the old page containing redundant data is left in the database.

All this means that if records have been deleted and/or a journal is present then the deleted records need to be found and the journal processed so that we can see and report on both the live and any deleted data.

Extraction of records that may have been deleted, and of partial records (see the article on my website that covers this in more detail), is straightforward with the Forensic Browser, as is processing any associated journals (both the old rollback journal and the newer WAL journal). You just need to choose your source database and when prompted select the options that you want.

If you choose to recover records then a copy of the source database will be made, and all of the live records and recovered records (as well as any from journals) will be added to this working database.

Creating a query to show the content of a table can be done by just dragging the table to the visual query designer window and check-marking the fields you want in your report. The SQL is generated automatically for you. Drag the mouse between columns in different tables to create simple or complex joins – all visually:

Filtering to show just a subset of the recovered records is straightforward; in the animation below I have clicked the filter icon in the column header and I am choosing to create a report containing just selected recovered records from the live database and the journal file associated with this Skype database:

Creating custom reports showing a subset of users within a specific timeframe

After selecting just the records we want from the sources we want, a common requirement is to restrict the report to one that contains messages from specified users and just within a given timeframe.

Again this is straightforward; in the same manner as we selected the records from the journal, we can add a further filter on the from_dispname column and choose just selected users from the Skype database:

Then by right clicking on a numerical timestamp column we can apply one of the built-in date and time conversions, selecting the appropriate display format for the column and any timezone offset we need:
Filtering records to just those within specified dates can be done by clicking the filter icon in the column header and entering the dates in the filter dialogue:
Building up filters in this granular manner is very intuitive and using the date conversion routines allows us to work with dates in formats we understand rather than the underlying (in this case) raw Unix date.
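Under the hood these filters simply add WHERE clauses to the query; a hand-written equivalent might look like this sketch (the user names and date range are placeholders):

SELECT DateTime(Messages."timestamp", 'unixepoch') AS MsgTime,
Messages.from_dispname,
Messages.body_xml
FROM Messages
WHERE Messages.from_dispname IN ('user one', 'user two')
AND Messages."timestamp" BETWEEN CAST(strftime('%s', '2015-03-01') AS INTEGER)
AND CAST(strftime('%s', '2015-04-01') AS INTEGER)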

Displaying pictures (from blobs, or external to a database) within your report

Many databases hold pictures such as avatar pictures (Skype) and message attachments (WhatsApp); some forensic applications will display these pictures alongside the appropriate data, but most SQLite browsers are not designed for this. Many applications, however, store pictures outside of the database: Blackberry Messenger stores attachments as individual jpgs in the device’s file system, and some versions of Kik messenger store the attachments embedded within individual binary plists stored on the device’s file system.

Irrespective of the method used the Forensic Browser is able to display these pictures alongside the message to which they relate. Displaying a blob as an image is trivial in the Forensic Browser, either choose to display all blobs as pictures or right click on a column and choose to display just that column as a picture:

The screenshot below shows an HTML report containing various fields including the avatar images from a test Skype contacts table:
Importing images that reside in the normal file system is also straightforward. In Blackberry Messenger, the file transfers table contains a column FileTransferId holding a number that is the name of a jpg located in the device’s file system. The Forensic Browser provides a simple import dialogue that allows the user to specify where the pictures are located so that they can be imported into the case file.
It is then very straightforward to create a simple query that displays the images within the table.

For more complex import scenarios such as Kik messenger where the pictures are stored external to the database in binary plists, Browser extensions can be written to perform the import task. See the article re Kik messenger pictures on my web site.

Recovering deleted records from an SQLite database

In this article, I want to discuss how we can recover deleted records from an SQLite database, or rather how we can recover all records and distinguish between those that are live in the DB and those that are found in unused areas and do not match a live record. I will also show how the first few bytes of records are regularly overwritten by SQLite structures and how these partial records can be recovered.

Before I can discuss how we do this (it’s quite straightforward with SQLite Forensic Recovery), I need to take you briefly through a slightly simplified structure of a database, explaining how the database fits together and how records are stored within the ‘pages’ of the database. At that point, I can give a basic overview of the algorithm used to recover the non-live records, which will give you, as the investigator, a handle on how much confidence you can ascribe to one of these records.

I then show you how the deleted records and partial deleted records look when you open the database in The Forensic Browser for SQLite.

Database structure

SQLite databases are made up of a number of pages of a fixed number of bytes: 1024, 2048, 4096, 8192, 16384, 32768 or 65536. The first 100 bytes of the first page contain a header; the remainder of this page follows the same structure as every other page (although the ‘data’ area is reduced by 100 bytes).

A database is made up of a number of tables and indexes (indexes are just a special sort of table) and each table/index uses one or more pages. A page can only be used for one table/index i.e. you will not find live records from different tables in the same page. Each table or index is stored as a B-Tree (Balanced tree) with each B-Tree occupying one or more pages.

There is a master table (sqlite_master) that resides (has its root) in the first page of the database and may overflow into additional pages. The master table tells us where the root of every user created table and index can be found.

The schema for the sqlite_master table is

CREATE TABLE sqlite_master(
type text,
name text,
tbl_name text,
rootpage integer,
sql text
)

A typical entry for a row in this table follows (in csv form), this is the DbMeta table from a Skype database, we can see the SQL CREATE statement for the table in the SQL column and we can see that the rootpage for this table is at page 2.

"table","DbMeta","DbMeta","2","CREATE TABLE DbMeta (key TEXT NOT NULL PRIMARY KEY, value TEXT)"

So to find out where the root page of any table or index is located we just need to read the sqlite_master table at page 1 and locate the entry for the table/index and get the location for the root (rootpage). The first few rows of the sqlite_master table for a Skype database are shown below:
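Expressed as SQL, this lookup is simply:

SELECT rootpage FROM sqlite_master WHERE type = 'table' AND name = 'DbMeta'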

SQLite Pages within a B-tree are classified as either internal or leaf pages, internal pages contain pointers to other internal pages or to leaf pages. Leaf pages contain the data for the table. Typically a table will comprise one root page (it must always have exactly one root page) which points to a number of leaf pages. Larger tables/indexes contain a root page that points to internal pages which in turn point either to more internal pages or to leaf pages. In very small tables the root page can double as a leaf page, essentially a root page is either an internal page or a leaf page without a parent.
As tables are appended to, records are modified or records are deleted, the number of pages used by a table can grow or shrink and records can be moved between pages. A list of pages that are not currently in use is held in the freelist; the freelist itself is stored as one or more pages.

Now on to how a page is constructed. Although a page has a structure it is a little more free-form and slightly more complex to describe. There are five main types of pages – leaf and internal pages for both tables and indexes and overflow pages for entries that cannot fit into a single page (Binary Large Objects (BLOBs) such as pictures).

A page starts with an 8 or 12-byte header (varying with the type of page – 12 bytes for interior pages and 8 bytes for leaf pages). This is followed immediately by the variable length cell pointer array. Each entry in the cell pointer array is an offset to (points to) the start of a record (a row from the table). The number of entries in the cell pointer array is recorded in the page header.

The following graphic shows the structure, a header followed by a cell pointer array with each element in the array pointing to a record. The records are variable length and can be interspersed with areas of unused space.

We can see from the above that there is unused space within a database page which we need to consider when carving records along with those pages that are marked as completely unused.

However, before we carve it would be useful to have a look at the structure of a record/row within a table to see how each column is stored and the limitations this imposes on us when carving deleted or partial records.

It is worth noting at this point that SQLite Forensic Explorer allows a user to step through each and every byte in a database page, decoding the various structures. Some of the screenshots that follow have been created using this program.

So how is a record stored on disk? A record is usually stored as a contiguous series of bytes: a header which holds a few different lengths (and almost invariably the rowid), followed by the body, or payload.

The payload is split into two portions – a serial type for each column in the row, followed immediately by the data for each column appended back to back. The following diagram may help:

In the diagram above the record structure is for a table of the following form (it’s not quite correct but it makes the explanation easier):

CREATE TABLE test (id int, name text, age int, initials text, gender text, state text, country text)

The first thing to note is that there are 7 columns in the table definition but only 6 in our diagram. When SQLite stores a row of data it only stores columns up until the last column that contains data; any trailing columns (containing NULL) are ignored to save space. So by the lack of an entry for the last column (country), we can conclude that in this record the country field is NULL.

The data for the first column is an int and this integer value is stored immediately following the last serial type (txt7).

The second column is an 8 character text column and its data follows the integer data for column 1.

The third column, an int, is stored following column 2.

Column 4, a three character text column, follows column 3.

Column 5 is NULL; there is a serial type for this column but NO DATA is stored.

Column 6 is a 7 character text column and the data for this column immediately follows column 4.

We can see from the above that, in simple terms, the record structure can be sparse at two levels: once there is no more data to store at the end of a row, no further column serial types (and naturally no column data) are stored in the table. If a column is NULL but is followed by further populated columns, then the serial type is recorded as NULL but, logically, no data is associated with it.

From the above you can see that there are two types of unallocated records:

  1. Those in unused pages where there is a pointer from the cell pointer array to the page
  2. Those in unused space that we can match to one of the tables

Freeblock Records

There is a final consideration when dealing with deleted records, the freeblock structure.

SQLite maintains a linked list of freeblocks in a page. A free block is a block of free space that follows the first allocated record.

In the graphic below the page header points to the offset of the first freeblock (this entry will be 0 if there are no freeblocks), i.e. offset 753. The first two bytes at this offset are a 16-bit integer that points to the next freeblock, 0x0379 = offset 889; the second two bytes are the size of this freeblock, 0x0026 = 38 bytes.

The second freeblock has the first two bytes = 0x0000, signifying that this is the last freeblock in the database page. This freeblock is 0x0024 bytes in length, i.e. 36 bytes.

When a record is deleted (or moved to another page) and the space allocated is freed up, a freeblock is written to the page and the freeblock list updated accordingly. If we look at the record at offset 889 before it was deleted and the record afterwards, we can see the effect this has on the deleted record.

In the combined screenshots from The Forensic Explorer for SQLite below the upper image shows the record before it was deleted along with a breakdown of the record structure (to the right). The lower image shows the freeblock and how the first four bytes of the record have been overwritten with 0x00000024:

There is, of course, a need to be able to forensically carve all three types of deleted record; the Forensic Toolkit for SQLite can do this.

Recovering deleted data from databases

Now we need to look, in simplified form, at the process used when carving data from a deleted page, i.e. a page where the cell pointer array is intact and each entry in the cell pointer array ‘points’ to a row of data from a table.
The initial process is simple: identify a leaf page and, for each entry in the cell pointer array, read the associated record/row.
In the screenshot below the centre panel has a complete record highlighted in yellow; the cell pointer array entry at offset 8 (0x00D4 = 212) points to the start of this record. The record has 6 bytes (offset 216) that comprise the record serial types and these serial types are:

  • Txt5
  • Txt11
  • Txt11
  • Int
  • Txt99

All that is required now is that we find a matching table definition that has the form txt, txt, txt, int, txt. If you have been paying attention you will recall that the sqlite_master table has the form:

CREATE TABLE sqlite_master(
type text,
name text,
tbl_name text,
rootpage integer,
sql text
)

This matches nicely with the record structure we have recovered so we can insert the data associated with each of these serial types into a new (recovered) sqlite_master table.

If we now look at a more complex example from a Kik database, in the screenshot below, we can see that this record has 6 bytes of record serial types (record keys). We can also see that the serial types are:

  • NULL
  • Txt36
  • NULL
  • Int
  • Txt41
  • NULL

So how do we match up this carved record with the table to which it belongs? Simply put, we have to match the NULL columns with any column. An examination of the schema for the Kik database shows that the following table matches NULL, Txt, NULL, Int, Txt, NULL:

CREATE TABLE KIKContentURITable (
_id INTEGER PRIMARY KEY AUTOINCREMENT,
content_id VARCHAR,
platform VARCHAR,
priority INT,
content_uri VARCHAR,
type VARCHAR)

Using the Forensic Browser to carve deleted records from an existing SQLite database

From version 2.0.0 the Forensic Browser can automatically identify and carve all three types of deleted record from an SQLite database.

Simply create a case file and select a source database. The Browser will then ask whether you want to identify deleted records and if so will create a new database with some changes to allow for possible duplicate records and older, different, versions of a record that might have the same primary key as an existing record.

You can also select whether to identify records (including partial records) in any associated journal or WAL files.

The Forensic Browser will now automatically attempt to identify deleted records and populate a new table with all the records, existing (live), recovered and partial.

Recovered records are those where the header is intact and all columns from the table should be present (unless records overflow into another database page).

Partial records are those where the row header has been overwritten, in this case, there will be some missing data and the Forensic Browser will attempt to reconstruct this data.

The Browser also creates an MD5 hash of the content of each column in a row to help identify duplicate records; the Browser will not insert a recovered record into the table if a live record with the same hash exists.

The following animated gif shows the full procedure for recovering deleted records from an IOS mmssms.db file and creating a simple report showing converted date and time along with the message details and the status (live/recovered/partial) and filtering on just the recovered records:

Forensic examination of SQLite Write Ahead Log (WAL) files

I am sure that you are aware that when an SQLite database is opened, if there is an associated WAL file, the pages in this WAL are automatically written to the main database, thus overwriting records, and the WAL file is reset. You may not be aware, though, that the WAL can contain multiple copies of the same page (each with different data/records) and that there can also be a sort of WAL “slack”, i.e. records from a previous database transaction – if you like, records from previous WAL checkpoints. So by opening the database and committing the WAL you are potentially overwriting/missing valuable evidence.

This article describes how WAL files work and how to deal with them forensically – the steps are very straightforward with the Forensic Toolkit for SQLite and the article takes you through them. I will go into a little detail regarding the format and usage of a WAL file and some of the forensic implications of recovering data, and present two methods for recovering the data without missing or overwriting existing records.

WAL file overview

In order to maintain database integrity, SQLite often makes use of WAL files. At a very basic level, WAL files are easy enough to incorporate into a parent SQLite database – you just need to open the parent database in SQLite. WAL files are a form of cache whereby data that is written to an SQLite db is first written to the WAL file (when this is enabled) and then at a later time (known as a checkpoint) the SQLite data is written into the main database. So from a forensic viewpoint, the existing database is an older version of the data and when the WAL is checkpointed you see the current version.

So to incorporate a WAL file just open the main db in SQLite. SQLite will see the existing WAL file in the same folder as the main database and will assume that an error has occurred (application/computer crash) and will automatically checkpoint the WAL file and add the changes to the main database.

SQLite writes a value known as a Salt to the WAL file header, this salt is also written to each frame header for each page in the WAL file (the page header and page data itself are known as a frame).

So if pages 3, 7, 9, 32, 4 are updated (in that order) and the salt is 1234 then the WAL file will look as follows.

What happens if a record is updated multiple times? In this case the complete page (WAL files always work at a page level) that contains the changed record will be written to the WAL file multiple times, and multiple copies of this page will exist. When a checkpoint occurs each page will be written back to the main database in the same order that they were written to the WAL file.

So if, in the same checkpoint, pages 7, 6, 7 are updated (in that order) and the salt is still 1234, then the WAL file will look like this:

So a WAL file can contain multiple differing copies of a page.

A checkpoint occurs when the WAL file reaches 1000 pages (this is configurable in SQLite) or when the programmer forces one. At this point each page will be written back to the main database in order, so only the last version of page 7 in our example would be seen by an investigator using SQLite to checkpoint the WAL file.
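For reference, both the threshold and a forced checkpoint are exposed through standard SQLite pragmas:

PRAGMA wal_autocheckpoint = 1000; -- set the auto-checkpoint threshold (1000 pages is the default)
PRAGMA wal_checkpoint; -- force a checkpoint immediately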

It gets more interesting though. When a checkpoint occurs then the WAL file is not deleted, rather the next set of pages are written from the start of the WAL file.

SQLite determines what is current by means of the salt value and to do this it ensures that the salt value changes for each checkpoint.

Our example below shows our WAL file after a checkpoint has occurred and then three new pages have been updated – pages 5, 7 and 2. This new WAL file has the new salt (6789) written to the header and to each new frame header. So when SQLite writes the content of a WAL file back to the database, it first reads the salt from the WAL header and checks each frame in the WAL file to make sure that the salt matches – when it reaches a salt that doesn’t match it stops writing data. This means that any frame in the WAL file that has a different salt will never be written back to the main database (essentially this has already happened at a previous checkpoint). This data could of course be forensically valuable.

It can be seen from the image below that the WAL file contains 5 pages from the previous checkpoint, without tools such as SQLite Forensic Explorer there is no way to look at the contents of these pages in “WAL Unused”.

So we have, in effect, unallocated space in a WAL file that contains records from the previous checkpoint.

So how can we examine a WAL file?

Forensically, the normal method of looking at a WAL file is to take advantage of the fact that when an SQLite database is opened, if a WAL file is present, the content of the WAL file will be written to the main database (i.e. it is checkpointed). There are a number of problems with this approach.

1. Old data in the main database file is overwritten by the data from the WAL file potentially losing evidentially valuable data
2. The WAL file may contain more than one version of a page
3. The WAL file could contain deleted data in unused space
4. WAL files can contain data that has previously been written (checkpointed) to a database and may subsequently have been overwritten

It follows therefore that a better approach could reveal evidence that might otherwise be lost – there are two methods:

1. Use Forensic Recovery for SQLite to carve all the records from the WAL file and insert them into a new database
2. Use SQLite Forensic Explorer to look at the WAL file page by page.

I will discuss both methods:

SQLite Forensic Recovery and Explorer are only available as part of the SQLite Forensic Toolkit, more information here.

Forensic Recovery for SQLite

This solution is the simplest one: use Forensic Recovery for SQLite to carve directly from the WAL and create a new database with just the records from the WAL. All we need to do is ensure that a template is created for the database; this can be done by just pointing Forensic Recovery for SQLite at the existing database (the DB associated with the WAL) when following the wizard. Forensic Recovery for SQLite will display tables as per the original schema (with constraints removed so duplicate records are allowed) showing all of the records; it also creates a new SQLite database that we can examine with the Forensic Browser for SQLite. The steps are:

  • Run Forensic Recovery for SQLite and work through the wizard
  • When you get to the Source file page, select the WAL file you want to carve from (I am carving from a Firefox places WAL file)
  • When you get to the templates page, “right click” the “Apps table” and choose “import tables from sqlite databases”
  • Navigate to the database (in my case the Firefox places.sqlite file) and select it
  • Call your template something like “Firefox – places”
  • Make sure just the new template that you create (Firefox – places in my example) is check marked (you don’t want tabs to be created for all the other applications)
  • Uncheck the “Search for database definitions” box – there is no need as we are carving from a known journal

Finally, select the output folder for the carved database and hit “Finish”.

Once the carve process has completed (a few seconds) you can view each table in turn and review the carved records. Note that the recovered records in this table are logged as “mapped” rather than “Live” as they are not currently live in the associated database (the WAL has not been written back).

Double-clicking on any entry above will take you to a table showing exactly where (byte offset) in the WAL file the recovered record was located.

Of course, you can also use The Forensic Browser for SQLite to create custom reports on the recovered data from the WAL file; you can even open the recovered WAL file and the original database and create a compound report based on the content of both.

SQLite Forensic Explorer

While SQLite Forensic Explorer can read and decode pages, and records within pages, from a WAL file individually, a better result is obtained if the table schemas from the database that is associated with the WAL file are available to SQLite Forensic Explorer. When this additional information is available the records from WAL file pages can be correctly decoded and presented to the examiner in an easier-to-read grid form.

The process of investigating an SQLite WAL file with SQLite Forensic Explorer is simple.

1. Select “Explore database/WAL file” from the main menu and choose a WAL file

2. SQLite Forensic Explorer will then prompt for a SQLite database to associate with the WAL file

3. Start your investigation.

The first page the investigator will see is the 32 byte WAL file header. The information of most value to an investigator here is the page size and the WAL salt. Every additional page in the WAL will be page size (in this case 32K) plus 24 bytes in length (the 24 bytes are referred to as the WAL frame header); the salt values from the WAL file header are repeated in every frame header.

Subsequent pages within the WAL file can be accessed using the centre (decoded records) display’s context menu or the associated hotkeys:
Each page is decoded as for a normal SQLite database with the addition of the SQLite WAL frame header.
If a salt does not match, SQLite Forensic Explorer highlights this, so it’s easy to see a page that is not from the current cache.

Each record is displayed in decoded form in the records display (shown above) and selecting the start of the payload in the hex display pops up a dialogue showing the record inserted into its matching table (obtained from the schema from the associated database) and the record can now be written to a copy database for further examination using additional software such as SkypeAlyzer.

References
http://www.sqlite.org/draft/wal.html
http://www.sqlite.org/draft/atomiccommit.html

Investigating Skype cloud-based media_cache/image sharing with the Forensic Browser for SQLite

Skype recently introduced cloud-based operation and started moving away from peer-to-peer messaging with a view, to paraphrase Skype, to improving the service that we receive.

Without going into the pros and cons of this (from a forensic point of view it is irrelevant anyway), the move has had the effect of introducing a new set of artefacts and in particular a new location for stored/cached image files (pictures).

More information here: https://support.skype.com/en/faq/FA1…t-is-the-cloud

This article deals with the SQLite tables that reference these pictures, the locations of the pictures themselves and how to join the relevant tables, decode the data held in certain blob fields and create a report showing who sent what to whom, including the pictorial evidence where possible.

At the end of the article, I will have shown how the different tables fit together and will provide a Browser extension that will create the necessary tables and import the cached pictures; you will be able to run a report that shows who sent an image and when. Alongside this, it will display the original image (if sent from the machine we are investigating) and will display the cached image. From this information, if the sender is the owner of the machine we are investigating, we will be able to see whether the image was sent from this machine or was sent from another device and synced with this machine. In certain cases, we will be able to see the original path on a remote user’s machine (i.e. when someone sends an image to us) and therefore potentially glean information about the remote user’s operating system.

Note: This article was prepared after looking at a small test set of Skype installations on Windows 7 and 8 PCs, as such the details within may need to be revised at a later date when more information comes to light.

While this article is quite lengthy and a little technical, it is important to realise that to use the Forensic Browser for SQLite (part of the Forensic Toolkit for SQLite) to examine the Skype media cache you don’t need to understand SQL; all you need to be able to do is apply it, and this can be done in just a few short steps that will be summarised at the end.

This particular investigation started off when Jimmy Weg from Montana DCI contacted me and asked if I knew anything about the Skype media cache. He said that the files in the cache were created when a user/Skype synced between devices and he wanted to know if there was a way to determine the sender and recipient of the files.

At the end of this process you will be able to run a simple script that prompts you for the relevant file locations and that then creates the necessary queries such that you can run an installed report in the Forensic Browser for SQLite that looks as below and can be exported directly as a HTML report. No knowledge of SQL is required by the user.

So naturally the first thing I did was to look at the media_cache folder on my machine and see what was present. This folder on my machine is at the following path:

C:\Users\Paul\AppData\Roaming\Skype\r3scue193\media_messaging\media_cache

The contents, as can be seen below, were cached image files with some odd naming conventions, and associated files that looked from their names like they might be thumbnail images.

A quick review of the content of the files confirms this (at a larger scale the duplicate images do include a thumbnail):
What is interesting here is that I can see pictures that I had received, pictures that I had sent and also pictures that I had sent from another device. So clearly from an investigation point of view, this is very interesting. What other information was present?

A root around in the asyncdb subfolder of media_cache shows a cache_db.db file that on examination is, unsurprisingly, an SQLite database. This database contains just one table, “assets”, the content of which is shown below in the Forensic Browser for SQLite.

The access_time field records the number of 100 nanosecond intervals since 1/1/1970 and the Forensic Browser for SQLite can decode this (and apply a timezone offset should I desire) for me. The serialized_data field is a Binary Large OBject (BLOB) and contains what appears to be the file name from the cache (more on this later); blobs are displayed as hex by default in the Browser.
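The validation technique from earlier in this article applies here too; a sketch of the conversion (using the column names visible in the assets table) is:

SELECT DateTime(assets.access_time / 10000000, 'unixepoch') AS AccessTime,
assets.serialized_data
FROM assets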

While this looks interesting it doesn’t help us with our “who sent what to whom” question, so I needed to look further afield. Knowing that Skype maintains lots of useful data in its main database (main.db), I decided that my next step would be to see if I could link the media_cache cache_db.db with a table in main.db.

The Forensic Browser allows me to add additional databases (attach them) to the query designer and then perform cross-database queries, so I attached main.db to the Browser and started looking through the tables.

One of these tables jumped straight out at me, not least because I recognized the name of my bike (a Capra) and the picture I had taken at the Falmouth Tall Ships event last year, pictures of both appear above.

Importantly, I recognize all of the pictures, and the difference between the two groups is that those with a storage_document_id are pictures I sent and the others are pictures I received.

We still need to show who the sender and receiver are, so back to the tables in main.db. I know that Skype often stores system status information in the messages table, so the first thing I did was to look in the messages table at the approximate times recorded in the table above; this came up trumps. There were a number of records that were related to my previous query, and these records all had a type ‘201’, so I was able to quickly build a visual query on just type 201 records from the messages table. You can see the values from the original_name column above appearing in the screenshot below, embedded in the body_xml column:

It would seem logical to join these two tables (messages and MediaDocuments), but we need to work out how to perform our join, i.e. on what columns. It is clear that there are no exact matches where we can say col1 = col2 so we need to engineer one. We do this by extracting and matching the URI from the body_xml field in the messages table with the URI field in the MediaDocuments table using an SQLite core function “instr”. Some of the matching rows are highlighted below:
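A sketch of such an engineered join (assuming the uri column name seen in the MediaDocuments table) is:

SELECT Messages.from_dispname,
MediaDocuments.original_name,
MediaDocuments.uri
FROM Messages
JOIN MediaDocuments ON instr(Messages.body_xml, MediaDocuments.uri) > 0
WHERE Messages.type = 201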

You can find more information on the SQLite core functions here:

https://www.sqlite.org/lang_corefunc.html

We can now join the messages table to the MediaDocuments table and the MediaDocuments table to the assets table. The only thing that remains to be done is to import the original images, if they exist, from the original folders (original_name) and the cached images from the media_cache folder. While this can be done using the built-in functionality of the Forensic Browser, as I am providing a Browser extension to create the joins on the different tables and extract the cached filenames from the serialized_data column, it makes sense for me to also import the pictures in the extension. This means that you just need to run a single program and follow a few prompts to create your report. So all that needs to be done by you is to run the extension (if you are a Forensic Browser user and haven’t got a copy of this browser extension then please get in touch).


Running the Browser extension:

  • First, run the Browser and create a case file (you should create a case file whenever using extensions)
  • Open the main.db (I suggest that you do not create a working copy at this time)
  • Now run the Skype Media Cache parser extension from the Extensions menu.

Select run and choose the case file

This next step is optional and you can just hit cancel.

Choose the path to the root of your extracted data and choose an offset to ensure valid paths. In the dialogue below the first three “file names to find” are possible valid files from the local file system we are investigating. When the file path (from character 3) is appended to the prefix, a valid path on the investigation machine is obtained – the extension then checks the file path for any existing matching files and shows them in the bottom memo. At this point (when valid file paths show in the bottom memo) you can select OK to continue.

When the Browser extension completes, hit “Close”, there will be two new SQL queries saved in the Query Manager as below:

The first query “Full Query” returns every row from the combined tables (as well as any pictures that were imported). The second query “Abbreviated Query” returns a subset of the main columns from the query. You are of course encouraged to modify these queries to get the report you would like.

The remaining three queries are the SQL for the VIEWs used by the two main report-generating queries above. While a single compound query could be written, it is useful practice to break complex queries down into smaller subqueries/views in order to simplify the problem.

An example of the output of the abbreviated query is shown below:

This output can be saved to HTML/XLSX etc. as with any other query.

What does it mean?

There are some excerpts from the results shown below that help explain what we are seeing. The main.db file and the extracted profile image are all from my office Windows 7 PC.

First off, note that there are two rows for each sent picture; this is because the media_cache folder holds two pictures for each transfer: one full size and one thumbnail.

The first two rows show a picture that was sent by a colleague in Canada to me; the original_name column contains the name of the picture on his device. The author and from_dispname columns contain his Skype user name and “friendly” name. The dialog_partner column is also populated with his name.

The second two rows show a picture that I sent to him from my Surface Pro PC. Note that the dialog_partner column is not populated, but my colleague’s name does appear in the chatname column. The original_name column contains the file name on the Surface Pro. My user name is correctly shown in the author and from_dispname columns.

Rows 5 and 6 show a file that was sent from my iPhone to a second Skype test account I have that was running on a different machine (another laptop). Note that the original_name column is empty; this may be because one or both devices do not support the new photo sharing functionality at this time.

Finally, in this screenshot the bottom two rows show a picture sent from this PC (the Windows 7 desktop) to another colleague, Gary. In this case the original_name field contains the fully qualified path of the original picture on the Windows 7 PC; however, the last two columns (original_filename and original_image) are not populated because the original picture has since been deleted – although helpfully Skype has maintained a cached copy for us.

In the screenshot below the two rows show a picture that was sent by me to my colleague Gary from my Windows 7 PC; in this case the original file still resides in the original “My Pictures” folder (note the fully qualified pathname) and, as it is still present, the browser extension has imported the original picture and displayed it in the query/report as well as the cached picture.

To simplify the SQL I make use of SQL VIEWS. Views are akin to a virtual table: you can create a view that represents a complex SQL query and then just refer to that query by its view name in future.

For instance, I replace the following query with the view name “Messages201”

SELECT Messages."timestamp",
Messages.chatname,
Messages.author,
Messages.from_dispname,
Messages.dialog_partner
FROM Messages
WHERE Messages.type = 201
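Creating the view itself is a one-off statement wrapping that query:

CREATE VIEW Messages201 AS
SELECT Messages."timestamp",
Messages.chatname,
Messages.author,
Messages.from_dispname,
Messages.dialog_partner
FROM Messages
WHERE Messages.type = 201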

I can then use either the full query or just “SELECT * FROM messages201” to get the same results. The three VIEWS I create are available for use by the Forensic Browser user as follows:

In the screenshot below on the Messages201 view there is a message with a type 201 that has no body_xml data. This is the remnant of a picture I sent to my Canadian colleague while he was offline and subsequently deleted before he had a chance to download it.
The next piece of this puzzle is the MediaDocuments table from main.db – this lists the original name of the file.
The actual image I sent is present in the assets table in the cache_db.db; the timestamp for this image shows the time at which I deleted the picture.
These three tables can be linked, after a fashion, by the dates and record IDs.

The row in the messages201 view has an ID of 5743 (all records in the Skype main.db have a unique ID irrespective of what table they reside in), the record in MediaDocuments has an ID of 5742, i.e. one previous. The edited_timestamp in Messages201 is 2015/03/05 20:20:03 and the access_time in assets is 2015/03/06 20:20:04.

If any Forensic Browser users need help with any of the SQL referred to above or installed into the Query Manager by the browser extension (or indeed any SQL query at all) then please get in touch and I’ll do what I can to help.

Obtaining and displaying Skype IP addresses (with maps) from ChatSync files

I had reason recently to look at Skype ChatSync files to recover the IP addresses held within, and I needed to get these into a report. For those of you that aren’t aware, when Skype is syncing data between two different accounts it uses ChatSync files to transfer this data. The data held within is, for the most part, duplicated in the main.db file (after all, that is what the sync part of ChatSync refers to). However, and most interestingly for forensic purposes, usernames and IP addresses are also stored within these files.

I have therefore written a Forensic Browser for SQLite extension that parses the folder containing these files and for every file records the following information in a new SQLite database:

  • The filename
  • The last written date
  • The user names
  • And for each username, the LAN and WAN IP addresses that are stored

It struck me when writing this application that I could also obtain some location information from an online service and display this information within a Skype report and further I could use the built-in mapping functions of the Forensic Browser for SQLite to display maps related to the latitude and longitude fields obtained from my IP lookup service.

Of course, location information based on IP addresses needs to be carefully considered, as IP addresses will often be those of a service provider. Nevertheless, an examination of the IP addresses, and particularly the associated maps, for my own Skype username quickly revealed some interesting locations.

The screenshot below shows the output of this process with three maps at different scales shown alongside the details from the ChatSync files.

The rest of the article will show how easy it is to create these reports yourself.

In order to follow these instructions you will need to obtain the ChatSync parser program from me (it needs a current Forensic Toolkit for SQLite licence) and if you want to obtain the location information you will need to have:

  1. A free account (and associated key) from IPInfoDB
  2. Internet connectivity when running the ChatSync parser program and Forensic Browser for SQLite.

So, first, visit IPInfoDB and create a free account at this page http://www.ipinfodb.com/register.php. You need to provide an IP address of the “server used to connect to the API gateway”; I used the IP address of my router (also conveniently displayed on the page above) and all seems to work OK. You need to acknowledge an email in the normal fashion and then wait 10 minutes after the acknowledgement before you can use the service. When the service is created you will be provided with a long alphanumeric key – you will need this later.

You can now run ChatSync (make sure it is in the folder alongside The Forensic Browser for SQLite so that it can pick up the licence file). If you have obtained a key from IPInfoDB then select the checkbox and enter your key in the edit box (this will be retained in the registry for future use).
Click on the parse button and when prompted choose the Skype ChatSync folder you want to examine
and then again when prompted choose a database to write the results to.
When the parsing has completed (this may take a minute or two) you can open the saved database in the Forensic Browser for SQLite.
Now that we have created and viewed the database in the Forensic Browser, it is a simple process to use the built-in feature to create a new table containing three geolocated maps for each IP address.

Select “Create geolocated images” from the “Tools” menu

In the following dialog choose the table that contains the latitude and longitude information, in this case “decodedchatsync”. You also need to specify the key, lat and long fields, but these should be filled in for you in this instance. You will need to provide (or accept) the name for the output table.

The zoom levels specify the scale for each of the three created maps (0 disables a map), with 16 being the maximum “zoom in” level (i.e. street level) and 1 the minimum.

Press OK and the table will be created; this may take a few minutes as the maps are created and downloaded via the OpenStreetMap server.

The resulting geodata1 table looks as follows:
However, in order to make sense of it, we want to display the maps alongside the data from the decodedchatsync table. To do this, add the decodedchatsync table to the query designer and then the geodata1 table. Create a link by selecting the id field in the decodedchatsync table and dragging it to the id field in the geodata1 table. Finally, select all columns (*) from the decodedchatsync table and just the three maps from the geodata1 table. Now execute the query.
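The SQL generated by these steps is equivalent to something like the following sketch (the map column names in geodata1 are illustrative):

SELECT decodedchatsync.*,
geodata1.map1,
geodata1.map2,
geodata1.map3
FROM decodedchatsync
LEFT JOIN geodata1 ON decodedchatsync.id = geodata1.id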
You can then use the column filter condition, should you wish, to filter on specific users:
The resulting query can be saved to HTML/XLSX, PDF etc.

Converting the new Firefox cache2 files to an SQLite DB for investigating

A few weeks ago I was contacted by a user who asked me to have a look at the new Firefox cache2 format files and see if I could convert the metadata (along with the cached file) into an SQLite table in a similar manner to the Chrome Cache extension I did for the Forensic Browser a few months ago. The result of this work is the free tool FirefoxCache2ToSQLite.

Before I discuss the tool and the structure of the data it presents to the user of the Browser, I’ll look a little at the format of the data. For this exercise I’ll ignore the index file for the cache, as this just provides a quick way of linking cache entries with individually cached files; this is not important when working with both sets of data in SQLite, as you’ll see.

But first a quick taster with a screenshot of a query on the resulting tables created by FirefoxCache2ToSQLite and viewed with the Forensic Browser for SQLite – this will give you an idea of the sort of data you can see with a formatted report before I go on with the details.

Cache2 file format

Individual cached files are simply written to disk as a normal file with a name which is a SHA1 hash of the URL of the file – simple. The file content is followed immediately by the metadata, followed by a 4 byte big-endian integer (all integers are big-endian) which is the length of the original file. These four bytes are the last four bytes of the file and equate to the offset of the start of the metadata.

So what is the metadata that follows the file content? The first few bytes are hashes of some of the data and are not really relevant to this article; we do, however, need to skip them, and as the amount of hash data varies with file size we need to understand a little of what is there. Essentially there is a 4-byte hash followed by a further 2 bytes for every 262,144 bytes (or part thereof) of the original file. So we divide the file size by 262,144 and round up, multiply by 2, add 4, and then add this to the offset value (the last four bytes of the cache file) to get to the start of the metadata itself.
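As a worked check using SQLite’s own integer arithmetic, taking the 600-byte (0x258) file shown in the screenshot below: integer division truncates, so adding 262,143 before dividing rounds the chunk count up:

Code:

-- one 4-byte hash plus one 2-byte chunk hash to skip,
-- so the metadata proper starts at 600 + 4 + 2 = 606
SELECT 600 + 4 + 2 * ((600 + 262143) / 262144) AS MetadataStart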

The first section of metadata is 28 bytes split into 7 big-endian numbers; this is followed immediately by the URL of the cached page.

The screenshot below shows that the last four bytes of the cache file point to the end of the file proper, i.e. the beginning of the metadata, at offset 0x0258. As this is a small file, there follows the 4-byte hash 45C6AA5D, followed in turn by the 2-byte hash 0339 covering the first (and only) 262,144-byte chunk. This is then followed by the 7 big-endian numbers and the URL of the cached file.

These 7 big-endian numbers, in sequence, are:

Version             00000001   1
Fetch Count         00000001   1
Last Fetched Date   54DDEFE3   13/02/2015 12:36:51
Last Modified Date  54DDEFE3   13/02/2015 12:36:51
Frequency           3AD345A3
Expiry Date         54E13BA3   16/02/2015 00:36:51
Key Length          00000031   49
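The three date fields are standard 10-digit unix dates, so they can be validated with the DateTime technique described earlier; 0x54DDEFE3 is 1423831011 in decimal:

Code:

SELECT DateTime(1423831011, 'unixepoch') AS LastFetched
-- returns 2015-02-13 12:36:51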

The remaining bytes comprise a varying number of tuples made up of name value pairs, i.e. pairs of strings as seen in the screenshot below. These strings are written to the tuples table in the name and value columns as appropriate.

Examining the cache

FirefoxCache2ToSQLite creates two different SQLite tables. The first contains the fixed-length fields and has the following schema:

CREATE TABLE cache2 (
filename text primary key,
filesize int,
file blob,
Version int,
FetchCount int,
LastFetchedDate int,
LastModifiedDate int,
Frequency int,
ExpiryDate int,
KeyLength int,
uri text)

As determined by the primary key field, there can be only one entry for each filename. The filesize field is the size of the cached file excluding any metadata, and the file BLOB field contains the data for the cached file itself; the Forensic Browser for SQLite will allow you to view this as hex, as text or, for images, as a picture. The remaining fields are as above, and their full definitions are outside the scope of this article.

The tuples table has the following schema

CREATE TABLE tuples (
filename text,
name text,
value text)

The filename field in this table is not a primary key, and there will usually be more than one entry for each corresponding entry in the cache2 table. Each entry has a filename field so a join can be created to the main cache2 table. The name and value columns contain the associated data for each tuple.

The following screenshot shows three of the tuples for the file with a name ending in 7d62; these tuples are security-data, request-method and response-head, along with the associated data for each of them.

A join can be used to create custom queries/reports such as the one below, which includes the file data displayed as hex, the dates and just the response-head data from the tuples table:
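A minimal sketch of such a join follows; the column selection is illustrative:

Code:

SELECT cache2.uri,
DateTime(cache2.LastFetchedDate, 'unixepoch') AS LastFetched,
DateTime(cache2.ExpiryDate, 'unixepoch') AS Expires,
cache2.file,
tuples.value AS ResponseHead
FROM cache2
JOIN tuples ON tuples.filename = cache2.filename
WHERE tuples.name = 'response-head'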

Interpreting/Formatting a date encoded as a string

I was contacted this morning by one of our users who wanted help decoding a date held in a database in the format 20140310111203345, found in the iPhone chat app “Nimbuzz” messenger. It’s easy to look at this and decode it by eye, i.e. 2014/03/10 11:12:03.345, but what was wanted was a way of decoding all of the dates in this form in a particular table. This article will show how to format the date in a manner of our choosing, for example as above or in the American MM/dd/yyyy HH:mm:ss format. The process is straightforward and requires two ‘features’ of SQLite.

The first is substr, a function that allows us to extract specified characters from a string (or from each row in a column). The SQLite definition is:
substr(X,Y,Z)
substr(X,Y)

The substr(X,Y,Z) function returns a substring of input string X that begins with the Y-th character and which is Z characters long. If Z is omitted then substr(X,Y) returns all characters through the end of the string X beginning with the Y-th. The left-most character of X is number 1. If Y is negative then the first character of the substring is found by counting from the right rather than the left. If Z is negative then the abs(Z) characters preceding the Y-th character are returned. If X is a string then characters indices refer to actual UTF-8 characters. If X is a BLOB then the indices refer to bytes.
By way of example, SubStr(date, 5, 2) extracts 2 characters from the date column starting at position 5 and returns them as a new string.

The second is the SQLite concatenate operator; the SQLite definition is: The || operator is “concatenate” – it joins together the two strings of its operands.

So on to the solution. The test table we will use for this example contains just two rows, as follows:
Our task is to create a second column with the formatted date as above, i.e. 2014/03/10 11:12:03.345. The first query we will use just extracts the first four characters (i.e. 4 characters starting at character 1), the year, and creates a new ‘virtual’ column that we will call ‘FormattedDate’. The SQL and output table are below:
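A sketch of this first query (the table name messages is a placeholder; the date column is as described above):

Code:

SELECT date,
SubStr(date, 1, 4) AS FormattedDate
FROM messages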
Working through the complete string and also formatting the time characters gives us the final query as below:
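The complete query, again with the placeholder table name, looks something like this:

Code:

SELECT date,
SubStr(date, 1, 4) || '/' || SubStr(date, 5, 2) || '/' || SubStr(date, 7, 2) || ' ' ||
SubStr(date, 9, 2) || ':' || SubStr(date, 11, 2) || ':' || SubStr(date, 13, 2) || '.' ||
SubStr(date, 15, 3) AS FormattedDate
FROM messages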
Remember once you have created this query you can save it to the Query manager for use in a future case:
Finally, of course, we don’t have to extract the date in the order that the characters are displayed; if we wanted to use the American MM/dd/yyyy HH:mm:ss format then we can simply rearrange our query as follows:
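A sketch of the rearranged query, with the month and day moved to the front (and here without the milliseconds):

Code:

SELECT date,
SubStr(date, 5, 2) || '/' || SubStr(date, 7, 2) || '/' || SubStr(date, 1, 4) || ' ' ||
SubStr(date, 9, 2) || ':' || SubStr(date, 11, 2) || ':' || SubStr(date, 13, 2) AS FormattedDate
FROM messages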
The final display in the Forensic Browser for SQLite looks as follows:

Using group_concat to amalgamate the results of queries

Recently one of our users contacted me and asked for help creating a custom report for a Skype database after other forensic software had failed to include some very relevant data in its reports.

In a nutshell, the messages report he had produced using the other software only included the author of a given message and did not list the other person(s) who were part of that conversation. This information is maintained in different places in a Skype main.db SQLite database. This article discusses how to include this information in a user-friendly format.

In order to include the required information, the two tables that we need are the messages table and the participants table. Normally in a Skype installation the messages table includes most of the information we want in its dialog_partner column; however, in this database (from an HTC One phone) this column was blank.

In order to understand the problem more clearly, have a look at this extract of selected columns from the participants table:

The convo_id column is a unique number that refers to each conversation; the identity column holds the identity of each participant. Note that conversation 426 has 4 participants. Also note that r3scue193 (that’s me) appears in every conversation, as you would expect.

If we now look at the messages table we can see that there is a corresponding convo_id column and we can use this column to perform a join between the two tables so that we can include a list of all the participants.

So how do we get a list of all of the participants in a conversation?

If we simply do a LEFT JOIN between the two tables on convo_id then, for each row in the messages table, we will get a joined row for each matching row in the participants table; i.e. for a conversation with two participants we would get a duplicated row for each participant (see the identity column below), for three participants we would get three rows, etc. This would be very confusing:
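A sketch of such a naive join (the Messages columns shown assume the usual Skype main.db schema and are illustrative):

Code:

SELECT Messages.convo_id,
Messages.author,
Messages.body_xml,
Participants.identity
FROM Messages
LEFT JOIN Participants ON Participants.convo_id = Messages.convo_id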

The answer is the SQLite aggregate function group_concat, from the SQLite web site (https://www.sqlite.org/lang_aggfunc.html)

group_concat(X)
group_concat(X,Y)
The group_concat() function returns a string which is the concatenation of all non-NULL values of X. If parameter Y is present then it is used as the separator between instances of X. A comma (“,”) is used as the separator if Y is omitted. The order of the concatenated elements is arbitrary.

The query we use is below and utilizes the group_concat function along with the GROUP BY expression:

Code:

SELECT Participants.convo_id,
Group_Concat(Participants.identity, ', ') AS ConversationParticipants
FROM Participants
GROUP BY Participants.convo_id
In English: this selects each convo_id and its associated participants from the participants table, groups the rows by convo_id, and then concatenates the identity of each participant in each group into a single string, with each identity separated by a comma.

This might be clearer if we review the original table:

And then look at the results of the query:

Remember you can use the Case Manager to save the query you have just created for re-use on another case.

We now want to use the query created above with the messages table so that we can list the participants alongside each message. But before we do that we will make a VIEW based on the above query. A VIEW can be thought of as a saved query that can be used much like a table, and the Forensic Browser for SQLite allows you to create one by selecting the appropriate option from the Queries menu:

You then need to enter a name for the view, in this case Convo_participants:
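Behind the scenes this is equivalent to the following SQL, using the view name that we will refer to below:

Code:

CREATE VIEW Convo_participants AS
SELECT Participants.convo_id,
Group_Concat(Participants.identity, ', ') AS ConversationParticipants
FROM Participants
GROUP BY Participants.convo_id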

This view can now be used in the same way as any table by dragging it to the query designer. The screenshot below shows that:

Code:

SELECT Participants.convo_id,
Group_Concat(Participants.identity, ', ') AS ConversationParticipants
FROM Participants
GROUP BY Participants.convo_id

can now be replaced with:

Code:

SELECT *
FROM Convo_participants

All that now remains is to create a JOIN on the messages table using our new view and select the columns we want to display:
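A minimal sketch of that final query (the timestamp, author and body_xml columns assume the usual Skype Messages schema; the selection is illustrative):

Code:

SELECT DateTime(Messages.timestamp, 'unixepoch') AS MessageDate,
Messages.author,
Convo_participants.ConversationParticipants,
Messages.body_xml
FROM Messages
JOIN Convo_participants ON Convo_participants.convo_id = Messages.convo_id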