Friday, December 3, 2010

Limitations of SQLite version 3

Found a good source on the limitations of the SQLite embedded database, and I am quite impressed by the numbers it quotes as limits in every category. In my opinion those are not really limitations :D. Rather, they are generous allowances. You can take a look at http://www.sqlite.org/limits.html where all the maximums are listed. Here are a few examples, all taken from the original site.

Maximum Number Of Rows In A Table

The theoretical maximum number of rows in a table is 2^64 (18446744073709551616 or about 1.8e+19). This limit is unreachable since the maximum database size of 14 terabytes will be reached first. A 14-terabyte database can hold no more than approximately 1e+13 rows, and then only if there are no indices and if each row contains very little data.

Maximum length of a string or BLOB

The maximum number of bytes in a string or BLOB in SQLite is defined by the preprocessor macro SQLITE_MAX_LENGTH. The default value of this macro is 1 billion (1 thousand million or 1,000,000,000). You can raise or lower this value at compile-time using a command-line option like this:

-DSQLITE_MAX_LENGTH=123456789

The current implementation will only support a string or BLOB length up to 2^31-1 or 2147483647. And some built-in functions such as hex() might fail well before that point. In security-sensitive applications it is best not to try to increase the maximum string and blob length. In fact, you might do well to lower the maximum string and blob length to something more in the range of a few million if that is possible.

During part of SQLite's INSERT and SELECT processing, the complete content of each row in the database is encoded as a single BLOB. So the SQLITE_MAX_LENGTH parameter also determines the maximum number of bytes in a row.

The maximum string or BLOB length can be lowered at run-time using the sqlite3_limit(db,SQLITE_LIMIT_LENGTH,size) interface.
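For instance, here is a minimal C sketch of what that call might look like; the database file name and the one-million-byte figure are just assumptions for illustration:

#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    /* "example.db" is a hypothetical file name */
    if (sqlite3_open("example.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }

    /* Lower the per-connection string/BLOB limit to 1,000,000 bytes.
       sqlite3_limit() returns the previous value of the limit. */
    int old_limit = sqlite3_limit(db, SQLITE_LIMIT_LENGTH, 1000000);
    printf("string/BLOB limit lowered from %d to 1000000 bytes\n", old_limit);

    sqlite3_close(db);
    return 0;
}

Note that the limit is per database connection, so it has to be set again on every new connection you open.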

Maximum Number Of Columns

The default setting for SQLITE_MAX_COLUMN is 2000. You can change it at compile time to values as large as 32767. On the other hand, many experienced database designers will argue that a well-normalized database will never need more than 100 columns in a table.

In most applications, the number of columns is small - a few dozen. There are places in the SQLite code generator that use algorithms that are O(N²) where N is the number of columns. So if you redefine SQLITE_MAX_COLUMN to be a really huge number and you generate SQL that uses a large number of columns, you may find that sqlite3_prepare_v2() runs slowly.

The maximum number of columns can be lowered at run-time using the sqlite3_limit(db,SQLITE_LIMIT_COLUMN,size) interface.
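The same interface covers the column limit as well; using the same db handle as in the sketch above (the value 100 is just an example):

/* Cap the number of columns at 100 for this connection. */
int old_cols = sqlite3_limit(db, SQLITE_LIMIT_COLUMN, 100);

As with the length limit, this only lowers the cap for the current connection; it cannot be raised above the compile-time SQLITE_MAX_COLUMN.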

Maximum Number Of Tables In A Join

SQLite does not support joins containing more than 64 tables. This limit arises from the fact that the SQLite code generator uses bitmaps with one bit per join-table in the query optimizer.

SQLite uses a very efficient O(N²) greedy algorithm for determining the order of tables in a join and so a large join can be prepared quickly. Hence, there is no mechanism to raise or lower the limit on the number of tables in a join.

Note: I am sure those are pretty good numbers for what an embedded database can offer.