How to build a low-level database using Java's RandomAccessFile
As I searched JavaWorld's site for ideas for this month's Step by Step, I came across only a few articles covering low-level file access. Although high-level APIs such as JDBC give us the flexibility and power needed in large enterprise applications, many smaller applications require a simpler, more elegant solution.
In this article, we will build an extension to the
RandomAccessFile class that allows us to store and retrieve records. This "records file" will be equivalent to a persistent hashtable, allowing keyed objects to be stored and retrieved from file storage.
A primer on files and records
Before we jump headlong into the example, let's start with a basic backgrounder. We'll begin by defining some terms pertaining to files and records, then we'll briefly discuss class
java.io.RandomAccessFile and platform-dependency.
The following definitions are tuned to our example, rather than to traditional database terminology.
Record -- A collection of related data stored in a file. A record typically has multiple fields, each being a named and typed item of information.
Key -- An identifier for a record. Keys are usually unique.
File -- A sequential collection of data stored in some sort of stable storage such as a hard drive.
Nonsequential file access -- Allows data to be read from arbitrary locations in the file.
File pointer -- A number holding the position of the next byte of data to be read from a file.
Record pointer -- A record pointer is a file pointer that points to the location where a particular record begins.
Index -- A secondary means of accessing records in a file; that is, it maps keys to record pointers.
Heap -- A sequential file of unordered and variable-sized records. A heap requires some external indexing in order to meaningfully access the records.
Persistence -- Refers to storing an object or record for a certain length of time. This length of time is typically longer than the span of one process, so objects are usually persisted in files or databases.
Overview of class java.io.RandomAccessFile
RandomAccessFile is Java's way of providing nonsequential access to files. The class allows us to jump to a certain location in the file by using the
seek() method. Once the file pointer has been positioned, data can be read from and written to the file using the
DataInput and DataOutput interfaces. These interfaces allow us to read and write data in a platform-independent manner. Other handy methods in
RandomAccessFile allow us to check and set the length of the file.
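As a quick illustration of those core operations, here is a minimal, self-contained sketch (it is not part of the article's RecordsFile code, and it uses modern Java syntax such as try-with-resources for brevity):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class SeekDemo {
    // Writes an int at byte offset 100, then seeks back and reads it.
    static int roundTrip() throws IOException {
        File f = File.createTempFile("seekdemo", ".dat");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.seek(100);             // position the file pointer at byte 100
            raf.writeInt(42);          // DataOutput: big-endian, platform-independent
            raf.seek(100);             // reposition the file pointer before reading
            int value = raf.readInt(); // DataInput: reads the 4 bytes back
            raf.setLength(104);        // check/set the file length explicitly
            return value;
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // prints 42
    }
}
```

Note that writing at offset 100 of an empty file implicitly grows the file; setLength() lets us grow or truncate it explicitly, which we'll rely on later when records are added and deleted.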
Modern databases rely on disk drives for storage. Data on a disk drive is stored in blocks, which are distributed across tracks and surfaces. The disk's seek time and rotational delay dictate how data can be most efficiently stored and retrieved. A typical database management system relies closely on the disk's attributes in order to streamline performance. Unfortunately (or fortunately, depending on your interest in low-level file I/O!), these parameters lie far from reach when using a high-level file API such as
java.io. Given this fact, our example will disregard the optimizations that knowledge of the disk's parameters could provide.
Designing the RecordsFile example
Now we're ready to design our example. To start, I'll lay out some design requirements and goals, resolve issues of concurrent access, and specify the low-level file format. Before proceeding to the implementation, we'll also look at the main record operations and their corresponding algorithms.
Requirements and goals
Our main goal in this example is to use a
RandomAccessFile to provide a way of storing and retrieving record data. We'll associate a key of type
String with each record as a means of uniquely identifying it. The keys will be limited to a maximum length, although the record data will not be limited. For the purposes of this example, our records will consist of only one field -- a "blob" of binary data. The file code will not attempt to interpret the record data in any way.
As a second design goal, we'll require that the number of records our file supports not be fixed at creation time. We will allow the file to grow and shrink as records are inserted and removed. Because our index and record data will be stored in the same file, this restriction will cause us to add extra logic to dynamically increase the index space by reorganizing records.
Accessing data in a file is orders of magnitude slower than accessing data in memory. This means that the number of file accesses the database performs will be the determining performance factor. We'll require that our main database operations not depend on the number of records in the file. In other words, they'll be of constant order time with respect to file accesses.
As a final requirement, we'll assume our index is small enough to load into memory. This will make it easier for our implementation to fulfill the requirement that dictates access time. We'll mirror the index in a
Hashtable, which provides immediate record header lookups.
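The in-memory mirror can be as simple as a map from key to record location. The sketch below is an illustrative stand-in (the real index entry would carry a full record header rather than a bare file pointer, and the class name is my own):

```java
import java.util.Hashtable;

public class IndexMirror {
    // Simplified stand-in: map each key to its record pointer (a file offset).
    private final Hashtable<String, Long> index = new Hashtable<>();

    void put(String key, long recordPointer) {
        index.put(key, recordPointer);
    }

    // O(1) lookup: no file access is needed just to locate a record.
    long lookup(String key) {
        Long fp = index.get(key);
        if (fp == null) throw new IllegalArgumentException("Key not found: " + key);
        return fp;
    }

    public static void main(String[] args) {
        IndexMirror m = new IndexMirror();
        m.put("foo.name", 1024L);
        System.out.println(m.lookup("foo.name")); // prints 1024
    }
}
```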
Correction: The code for this article has a bug that causes it to throw a NullPointerException in many cases. The routine insureIndexSpace(int) in the abstract class BaseRecordsFile is intended to move existing records to the end of the file when the index area needs to expand. After the "first" record's capacity is reset to its actual size, the record is moved to the end, and dataStartPtr is then set to point to the second record in the file. Unfortunately, if the first record contained free space, the new dataStartPtr will not point to a valid record, because it was incremented by the first record's length rather than its capacity. The modified Java source for BaseRecordsFile can be found in Resources.
-- Ron Walkup, Senior Software Engineer
Synchronization and concurrent file access
For simplicity, we start by supporting only a single-thread model, in which file requests are prohibited from happening concurrently. We can accomplish this by synchronizing the public access methods of the
RecordsFile classes. Note that you can relax this restriction to add support for concurrent reads and writes on nonconflicting records: You'll need to maintain a list of locked records and interleave reads and writes for concurrent requests.
Details of the file format
We will now explicitly define the format of the records file. The file consists of three regions, each with its own format.
The file headers region. This first region holds the two essential headers needed to access records in our file. The first header, called the data start pointer, is a
long that points to the start of the record data. This value tells us the size of the index region. The second header, called the num records header, is an
int that gives the number of records in the database. The headers region starts on the first byte of the file and extends for
FILE_HEADERS_REGION_LENGTH bytes. We'll use readLong() and readInt() to read the headers, and writeLong() and writeInt() to write them.
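A minimal sketch of reading and writing the two headers might look like this (the offsets are assumptions chosen for illustration: the long data start pointer at offset 0, followed by the int record count at offset 8; the real constants live in BaseRecordsFile):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

public class FileHeaders {
    // Assumed layout: long (data start pointer) at 0, int (num records) at 8.
    static final long DATA_START_HEADER_LOCATION = 0;
    static final long NUM_RECORDS_HEADER_LOCATION = 8;

    static void writeHeaders(RandomAccessFile raf, long dataStartPtr, int numRecords)
            throws IOException {
        raf.seek(DATA_START_HEADER_LOCATION);
        raf.writeLong(dataStartPtr);
        raf.seek(NUM_RECORDS_HEADER_LOCATION);
        raf.writeInt(numRecords);
    }

    static long readDataStartPtr(RandomAccessFile raf) throws IOException {
        raf.seek(DATA_START_HEADER_LOCATION);
        return raf.readLong();
    }

    static int readNumRecords(RandomAccessFile raf) throws IOException {
        raf.seek(NUM_RECORDS_HEADER_LOCATION);
        return raf.readInt();
    }

    // Round-trips sample header values through a temp file.
    static long[] roundTripDemo() throws IOException {
        File f = File.createTempFile("headers", ".db");
        f.deleteOnExit();
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            writeHeaders(raf, 4096L, 17);
            return new long[] { readDataStartPtr(raf), readNumRecords(raf) };
        }
    }

    public static void main(String[] args) throws IOException {
        long[] r = roundTripDemo();
        System.out.println(r[0] + " " + r[1]); // prints 4096 17
    }
}
```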
The index region. Each entry in the index consists of a key and a record header. The index starts on the first byte after the file headers region and extends until the byte before the data start pointer. From this information, we can calculate a file pointer to the start of any of the n entries in the index. Entries have a fixed length -- the key data starts on the first byte in the index entry and extends
MAX_KEY_LENGTH bytes. The corresponding record header for a given key follows immediately after the key in the index. The record header tells us where the data is located, how many bytes the record can hold, and how many bytes it is actually holding. Index entries in the file index are in no particular order and do not map to the order in which the records are stored in the file.
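Because entries have a fixed length, locating the nth entry is pure arithmetic. The sketch below shows that calculation; the constant values and the RECORD_HEADER_LENGTH and INDEX_ENTRY_LENGTH names are assumptions for illustration:

```java
public class IndexMath {
    // Sizes assumed for illustration; the real constants live in BaseRecordsFile.
    static final int FILE_HEADERS_REGION_LENGTH = 16;
    static final int MAX_KEY_LENGTH = 64;
    static final int RECORD_HEADER_LENGTH = 16; // data pointer + capacity + count
    static final int INDEX_ENTRY_LENGTH = MAX_KEY_LENGTH + RECORD_HEADER_LENGTH;

    // File pointer to the start of the nth index entry (the key comes first).
    static long indexPositionToKeyFp(int n) {
        return FILE_HEADERS_REGION_LENGTH + (long) INDEX_ENTRY_LENGTH * n;
    }

    // File pointer to the record header inside the nth index entry,
    // which follows immediately after the fixed-length key.
    static long indexPositionToRecordHeaderFp(int n) {
        return indexPositionToKeyFp(n) + MAX_KEY_LENGTH;
    }

    public static void main(String[] args) {
        System.out.println(indexPositionToKeyFp(0));          // prints 16
        System.out.println(indexPositionToRecordHeaderFp(2)); // prints 240
    }
}
```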
Record data region. The record data region starts at the location indicated by the data start pointer and extends to the end of the file. Records are positioned back-to-back in the file with no free space permitted between records. This part of the file consists of raw data with no header or key information. The database file ends on the last block of the last record in the file, so there is no extra space at the end of the file. The file grows and shrinks as records are added and deleted.
The size allocated to a record does not always correspond to the actual amount of data the record contains. The record can be thought of as a container -- it may be only partially full. Valid record data is positioned at the start of the record.
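The container idea maps naturally onto a small header class. This sketch is illustrative (the field and method names are my assumptions, not necessarily those in the article's source):

```java
public class RecordHeader {
    long dataPointer; // file offset where the record's data begins
    int dataCapacity; // bytes allocated to the record (the container size)
    int dataCount;    // bytes actually in use (valid data sits at the start)

    RecordHeader(long dataPointer, int dataCapacity, int dataCount) {
        this.dataPointer = dataPointer;
        this.dataCapacity = dataCapacity;
        this.dataCount = dataCount;
    }

    // A record is a container that may be only partially full.
    int getFreeSpace() {
        return dataCapacity - dataCount;
    }

    public static void main(String[] args) {
        RecordHeader h = new RecordHeader(4096L, 256, 100);
        System.out.println(h.getFreeSpace()); // prints 156
    }
}
```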
Supported operations and their algorithms
RecordsFile will support the following main operations:
Insert -- Adds a new record to the file
Read -- Reads a record from the file
Update -- Updates a record
Delete -- Deletes a record
Ensure capacity -- Grows the index region to accommodate new records
Before we step through the source code, let's go over the chosen algorithms for each of these operations:
Insert. This operation inserts a new record into the file. To insert, we:
- Make sure the key being inserted isn't already contained in the file
- Ensure the index region is large enough for the additional entry
- Find free space in the file large enough to hold the record
- Write the record data to the file
- Add the record header to the index
Read. This operation retrieves a requested record from the file based on a key. To retrieve a record, we:
- Use the index to map the given key to the record header
- Seek down to the start of the data (using the pointer to the record data stored in the header)
- Read the record's data from the file
Update. This operation updates an existing record with new data, replacing the old data with the new. The steps for our update vary, depending on the size of the new record data. If the new data fits into the existing record, we:
- Write the record data to the file, overwriting the previous data
- Update the attribute that holds the length of the data in the record's header
Otherwise, if the data is too big for the record, we:
- Perform a delete operation on the existing record
- Perform an insert of the new data
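The branch between the two update paths reduces to a single capacity check. A minimal sketch (the method name is my own):

```java
public class UpdatePolicy {
    // True when the new data fits within the record's allocated capacity,
    // so it can be overwritten in place; otherwise we delete and re-insert.
    static boolean fitsInPlace(int dataCapacity, int newDataLength) {
        return newDataLength <= dataCapacity;
    }

    public static void main(String[] args) {
        System.out.println(fitsInPlace(256, 100)); // prints true  -> overwrite in place
        System.out.println(fitsInPlace(256, 300)); // prints false -> delete, then insert
    }
}
```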
Delete. This operation removes a record from the file. To delete a record, we:
- Reclaim the space allocated to the record being removed by either shrinking the file, if the record is the last one in the file, or by adding its space to an adjacent record
- Remove the record's header from the index by replacing the entry being deleted with the last entry in the index; this ensures the index is always full, with no empty spaces between entries
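The swap-with-last trick that keeps the index dense is easiest to see on an in-memory list. This sketch demonstrates only the entry shuffle, not the file I/O:

```java
import java.util.ArrayList;
import java.util.List;

public class IndexCompaction {
    // Removes entry i by overwriting it with the last entry, then shrinking.
    // This keeps the index dense: no empty slots between entries.
    static <T> void removeEntry(List<T> index, int i) {
        int last = index.size() - 1;
        index.set(i, index.get(last)); // copy the last entry over the deleted slot
        index.remove(last);            // shrink the index by one entry
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>(List.of("a", "b", "c", "d"));
        removeEntry(keys, 1);     // delete "b"
        System.out.println(keys); // prints [a, d, c]
    }
}
```

Note that the deleted entry's slot now holds what was previously the last entry, so index order is not preserved; as described above, order in the index carries no meaning anyway.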
Ensure capacity. This operation makes sure the index region is large enough to accommodate additional entries. In a loop, we move records from the front to the end of the file until there is sufficient space. To move one record we:
- Locate the record header of the first record in the file; note that this is the record with data at the top of the record data region -- not the record with the first header in the index
- Read the target record's data
- Grow the file by the size of the target record's data by using the setLength(long) method in RandomAccessFile
- Write the record data to the bottom of the file
- Update the data pointer in the record that was moved
- Update the global header that points to the first record's data
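The last step is exactly where the bug described in the correction earlier lives: the data start pointer must advance past the moved record's full allocated capacity, not just its data length. A small arithmetic sketch (names and values are illustrative):

```java
public class EnsureCapacityMath {
    // After moving the first record to the end of the file, the data start
    // pointer must advance past the record's full allocated CAPACITY. If the
    // record held free space and we advance only by its data length, the
    // pointer lands inside dead bytes instead of at the next record.
    static long advanceDataStartPtr(long dataStartPtr, int firstRecordCapacity) {
        return dataStartPtr + firstRecordCapacity;
    }

    public static void main(String[] args) {
        long dataStartPtr = 4096L;
        int firstRecordCapacity = 256; // bytes allocated to the first record
        System.out.println(advanceDataStartPtr(dataStartPtr, firstRecordCapacity)); // prints 4352
        // The buggy version added the record's data length (e.g. 100, giving
        // 4196), which is not the start of the next record when free space exists.
    }
}
```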
Implementation details -- stepping through the source code
We're now ready to get our hands dirty and work through the code for the example. You can download the complete source from Resources.
Note: You must use the Java 2 platform (formerly known as JDK 1.2) to compile the source.
BaseRecordsFile is an abstract class and is the main implementation of our example. It defines the main access methods as well as a slew of utility methods for manipulating records and index entries.