Biblio::Isis Perl module with example scripts: Biblio-Isis-0.24.tar.gz (40 Kb).
The latest source is always available from the Subversion repository.
Biblio::Isis - Read CDS/ISIS, WinISIS and IsisMarc database
  use Biblio::Isis;

  my $isis = new Biblio::Isis(
      isisdb => './cds/cds',
  );

  for (my $mfn = 1; $mfn <= $isis->count; $mfn++) {
      print $isis->to_ascii($mfn), "\n";
  }
This module will read ISIS databases created by DOS CDS/ISIS, WinIsis or
IsisMarc. It can be used as a Perl-only alternative to the OpenIsis module,
which seems to have deprecated its old XS bindings for Perl.
It can create hash values from data in an ISIS database (using to_hash),
an ASCII dump (using to_ascii), or just a hash with field names and packed
values (like ^asomething^belse).
A unique feature of this module is the ability to include_deleted records.
It will also skip zero-sized fields (OpenIsis has a bug in its XS bindings, so
zero-sized fields get filled with random junk from memory).
It also has support for identifiers (only if the ISIS database was created by
IsisMarc); see to_hash.
This module will always be slower than the OpenIsis module, which uses a C library. However, since it is written in pure Perl, it is platform independent (so you don't need a C compiler) and can be easily modified. I hope that it creates data structures which are easier to use than the ones created by OpenIsis, so reduced time in other parts of the code should compensate for the slower performance of this module (the speed of reading an ISIS database is rarely an issue).
Open ISIS database
  my $isis = new Biblio::Isis(
      isisdb => './cds/cds',
      read_fdt => 1,
      include_deleted => 1,
      hash_filter => sub {
          my ($v, $field_number) = @_;
          $v =~ s#foo#bar#g;
          return $v;    # the returned line is used for further processing
      },
      debug => 1,
      join_subfields_with => ' ; ',
  );
Options are described below:
This is the full or relative path to the ISIS database files: the common
prefix of the .MST and .XRF files, and optionally the .FDT file (if using
the read_fdt option).
In this example it uses ./cds/cds.MST and related files.
Boolean flag to specify if field definition table should be read. It's off by default.
Don't skip logically deleted records in ISIS.
Filter code ref which will be called before data is converted to a hash. It will
receive two arguments: the whole line from the current field (in $_[0]) and
the field number (in $_[1]). See the example after this list of options.
Dump a lot of debugging output even at level 1. For even more, increase the level.
Define the delimiter which will be used to join repeatable subfields. This option is included to support legacy applications written against versions of this module older than 0.21. By default, it is disabled. See to_hash.
Remove all empty subfields while reading from ISIS file.
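A minimal sketch of a hash_filter (the particular substitutions are purely illustrative; the important part is that the returned line is what gets processed further):

  my $isis = new Biblio::Isis(
      isisdb => './cds/cds',
      hash_filter => sub {
          my ($line, $field_number) = @_;
          $line =~ s/\s+$//;        # example only: strip trailing whitespace
          $line =~ tr/\x92/'/;      # example only: remap a stray CP1252 apostrophe
          return $line;             # the returned line is what gets converted to hash
      },
  );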
Return number of records in database
print $isis->count;
Read record with selected MFN
my $rec = $isis->fetch(55);
Returns a hash whose keys are field names and whose values are unpacked values for that field, like this:

  $rec = {
      '210' => [ '^aNew York^cNew York University press^dcop. 1988' ],
      '990' => [ '2140', '88', 'HAY' ],
  };
Returns current MFN position
my $mfn = $isis->mfn;
Returns ASCII output of record with specified MFN
print $isis->to_ascii(42);
This outputs something like this:
  210 ^aNew York^cNew York University press^dcop. 1988
  990 2140
  990 88
  990 HAY
If read_fdt is specified when calling new, it will display field names
from the .FDT file instead of numeric tags.
Read record with specified MFN and convert it to hash
my $hash = $isis->to_hash($mfn);
It has the ability to convert characters from the ISIS database (using
hash_filter) before creating the structures, enabling character re-mapping or
a quick fix-up of data.
This function returns a hash which looks like this:
  $hash = {
      '210' => [
          {
              'c' => 'New York University press',
              'a' => 'New York',
              'd' => 'cop. 1988',
          }
      ],
      '990' => [
          '2140',
          '88',
          'HAY',
      ],
  };
You can later use that hash to produce any output from ISIS data.
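For instance, here is a minimal sketch of walking this structure to print tagged output (it relies only on the shapes documented above; nothing else is assumed about the record):

  my $hash = $isis->to_hash($mfn);
  foreach my $field (sort keys %$hash) {
      foreach my $repetition (@{ $hash->{$field} }) {
          if (ref($repetition) eq 'HASH') {
              # field with subfields (repeatable subfields are stored as arrays)
              foreach my $sf (sort keys %$repetition) {
                  my $v = $repetition->{$sf};
                  $v = join(' ; ', @$v) if ref($v) eq 'ARRAY';
                  print "$field ^$sf $v\n";
              }
          } else {
              # field without subfields
              print "$field $repetition\n";
          }
      }
  }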
If the database was created using IsisMarc, it will also have two special fields
which are used for identifiers, i1 and i2, like this:
  '200' => [
      {
          'i1' => '1',
          'i2' => ' ',
          'a' => 'Goa',
          'f' => 'Valdo D\'Arienzo',
          'e' => 'tipografie e tipografi nel XVI secolo',
      }
  ],
In case there are repeatable subfields in the record, this will create the following structure:
'900' => [ {
'a' => [ 'foo', 'bar', 'baz' ],
}]
Or, in a more complex example such as
902 ^aa1^aa2^aa3^bb1^aa4^bb2^cc1^aa5
it will create
902 => [
{ a => ["a1", "a2", "a3", "a4", "a5"], b => ["b1", "b2"], c => "c1" },
],
This behaviour can be changed using the join_subfields_with option to new,
in which case to_hash will always create a single value for each subfield.
Assuming join_subfields_with => ' ; ' (as in the new example above), the result would change to something like:
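  902 => [
      { a => "a1 ; a2 ; a3 ; a4 ; a5", b => "b1 ; b2", c => "c1" },
  ],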
This method will also create an additional field 000 containing the MFN.
There is also a more elaborate way to call to_hash, like this:
  my $hash = $isis->to_hash({
      mfn => 42,
      include_subfields => 1,
  });
Each option controls creation of the hash:
Specify the MFN of the record
This option will create an additional key in the hash called subfields, which
preserves the original subfield order in the record and the index of each
subfield occurrence, like this:
902 => [ {
a => ["a1", "a2", "a3", "a4", "a5"],
b => ["b1", "b2"],
c => "c1",
subfields => ["a", 0, "a", 1, "a", 2, "b", 0, "a", 3, "b", 1, "c", 0, "a", 4],
} ],
Define the delimiter which will be used to join repeatable subfields. You can specify the option here instead of in new if you want per-record control.
You can override hash_filter defined in new using this option.
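Putting the per-call options together, a sketch of such a call (the particular delimiter and filter are just illustrations):

  my $hash = $isis->to_hash({
      mfn => 42,
      join_subfields_with => ', ',
      hash_filter => sub {
          my ($l, $field_number) = @_;
          $l =~ s/\s+$//;    # example only: trim trailing whitespace
          return $l;
      },
  });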
Return name of selected tag
print $isis->tag_name('200');
Read content of .CNT file and return hash containing it.
print Dumper($isis->read_cnt);
This function is not used by the module (.CNT files are not required for this
module to work), but it can be useful for examining your index (while debugging,
for example).
Unpack one of the two 26-byte fixed-length records in the .CNT file.
Here is the definition of the record:
  off key       description                                size
    0: IDTYPE   BTree type                                   s
    2: ORDN     Nodes Order                                  s
    4: ORDF     Leafs Order                                  s
    6: N        Number of Memory buffers for nodes           s
    8: K        Number of buffers for first level index      s
   10: LIV      Current number of Index Levels               s
   12: POSRX    Pointer to Root Record in N0x                l
   16: NMAXPOS  Next Available position in N0x               l
   20: FMAXPOS  Next available position in L0x               l
   24: ABNORMAL Formal BTree normality indicator             s

  length: 26 bytes
This will fill the $self object under cnt with a hash. It's used by read_cnt.
Some parts of the CDS/ISIS documentation are not detailed enough to explain some variations in the input databases which have been tested with this module. When I was in doubt, I assumed that OpenIsis's implementation was right (except for obvious bugs).
However, every effort has been made to test this module with as many databases (and programs that create them) as possible.
I would be very grateful for success or failure reports about usage of this
module with databases from programs other than WinIsis and IsisMarc. I have
tested it against the output of one isis.dll-based application, but I don't
know any details about its version.
As this is a young module, new features are added in subsequent versions. It's a good idea to specify the version when using this module, like this:

  use Biblio::Isis 0.23;
Below is a list of changes in specific versions of the module (so you can target older versions if you really have to):
Added ignore_empty_subfields
Added hash_filter to to_hash
Fixed bug with documented join_subfields_with in new which wasn't
implemented
Added field number when calling hash_filter
Added join_subfields_with to new and to_hash.
Added include_subfields to to_hash.
Added $isis->mfn, support for repeatable subfields and
$isis->to_hash({ mfn => 42, ... }) calling convention
Dobrica Pavlinusic
CPAN ID: DPAVLIN
dpavlin@rot13.org
http://www.rot13.org/~dpavlin/
This module is based heavily on code from the LIBISIS.PHP library (to read ISIS files, V0.1.1),
written in PHP, (c) 2000 Franck Martin <franck@sopac.org> and released under the LGPL.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
The full text of the license can be found in the LICENSE file included with this module.
Biblio::Isis::Manual for CDS/ISIS manual appendix F, G and H which describe file format
OpenIsis web site http://www.openisis.org
perl4lib site http://perl4lib.perl.org
CDS/ISIS manual appendix F, G and H
This is a partial scan of the CDS/ISIS manual (appendix F, G and H, pages
257-272) which was then converted to text using OCR and proofread.
However, there might be mistakes, and any corrections sent to
dpavlin@rot13.org will be greatly appreciated.
This digital version was made because the version currently available in digital form doesn't contain details about the CDS/ISIS file format, and those details were essential in making the Biblio::Isis module.
This extract of the manual has been produced in compliance with section (d) of the WinIsis LICENCE for the receiving institution/person, which says:
The receiving institution/person may:
(d) Print/reproduce the CDS/ISIS manuals or portions thereof,
provided that such copies reproduce the copyright notice;
This section describes the various files of the CDS/ISIS system, the file naming conventions and the file extensions used for each type of file. All CDS/ISIS files have standard names as follows:
nnnnnn.eee
where:
nnnnnn   is the file name (all file names, except program names, are limited to a maximum of 6 characters)
.eee     is the file extension identifying a particular type of file.
Files marked with * are ASCII files which you may display or print. The
other files are binary files.
System files are common to all CDS/ISIS users and include the various executable programs as well as system menus, worksheets and message files provided by Unesco as well as additional ones which you may create.
The name of the program file, as supplied by Unesco is
ISIS.EXE
Depending on the release and/or target computer, there may also be one
or more overlay files. These, if present, have the extension OVL.
Check the contents of your system diskettes or tape to see whether
overlay files are present.
All system menus and worksheets have the file extension FMT and the names are built as follows:
pctnnn.FMT
where:
p     is the page number (A for the first page, B for the second, etc.)
c     is the language code (e.g. E for English), which must be one of those provided for in the language selection menu xXLNG.
t     is X for menus and Y for system worksheets
nnn   is a unique identifier
For example the full name of the English version of the menu xXGEN is
AEXGEN.FMT.
The page number is transparent to the CDS/ISIS user. Like the file extension the page number is automatically provided by the system. Therefore when a CDS/ISIS program prompts you to enter a menu or worksheet name you must not include the page number. Furthermore as file names are restricted to 6 characters, menus and worksheets names may not be longer than 5 characters.
System menus and worksheets may only have one page.
The language code is mandatory for system menus and standard system worksheets. For example if you want to link a HELP menu to the system menu EXGEN, its name must begin with the letter E.
The X convention is only enforced for standard system menus. It is a good practice, however, to use the same convention for menus that you create, and to avoid creating worksheets (including data entry worksheets) with X in this position, that is with names like xXxxx.
Furthermore, if a data base name contains X or Y in the second
position, then the corresponding data entry worksheets will be created
in the system worksheet directory (parameter 2 of SYSPAR.PAR) rather
than the data base directory. Although this will not prevent normal
operation of the data base, it is not recommended.
System messages and prompts are stored in standard CDS/ISIS data bases. All corresponding data base files (see below) are required when updating a message file, but only the Master file is used to display messages.
There must be a message data base for each language supported through the language selection menu xXLNG.
The data base name assigned to message data bases is xMSG (where x is the language code).
System tables are used by CDS/ISIS to define character sets. Two are required at present:
ISISUC.TAB*   defines lower to upper-case translation
ISISAC.TAB*   defines the alphabetic characters.
Certain CDS/ISIS print functions do not send the output directly to the
printer but store it in a disk file from which you may then print it at
a convenient time. These files all have the file extension LST and
are reused each time the corresponding function is executed.
In addition, CDS/ISIS creates temporary work files which are normally
discarded automatically at the end of the session. If the session
terminates abnormally, however, they will not be deleted. A case of
abnormal termination would be a power failure while you are using a
CDS/ISIS program. These files, however, are also reused each time,
so you do not normally need to delete them manually. Work files
all have the extension TMP.
The print and work files created by CDS/ISIS are given below:
IFLIST.LST*   Inverted file listing file (produced by ISISINV)
WSLIST.LST*   Worksheet/menu listing file (produced by ISISUTL)
xMSG.LST*     System messages listing file (produced by ISISUTL)
x.LST*        Printed output (produced by ISISPRT when no print file name is supplied)
SORTIO.TMP    Sort work file 1
SORTII.TMP    Sort work file 2
SORTI2.TMP    Sort work file 3
SORTI3.TMP    Sort work file 4
SORT20.TMP    Sort work file 5
SORT2I.TMP    Sort work file 6
SORT22.TMP    Sort work file 7
SORT23.TMP    Sort work file 8
TRACE.TMP*    Trace file created by certain programs
ATSF.TMP      Temporary storage for hit lists created during retrieval
ATSQ.TMP      Temporary storage for search expressions
Each data base consists of a number of physically distinct files as indicated below. There are three categories of data base files:

mandatory files, which must always be present. These are normally established when the data base is defined by means of the ISISDEF services and should never be deleted;

auxiliary files created by the system whenever certain functions are performed. These can periodically be deleted when they are no longer needed;

user files created by the data base user (such as display formats), which are fully under the user's responsibility.

In the following description xxxxxx is the 1-6 character data base
name.
xxxxxx.FDT*   Field Definition Table
xxxxxx.FST*   Field Select Table for Inverted file
xxxxxx.FMT*   Default data entry worksheet (where p is the page number). Note that the data base name is truncated to 5 characters if necessary
xxxxxx.PFT*   Default display format
xxxxxx.MST    Master file
xxxxxx.XRF    Crossreference file (Master file index)
xxxxxx.CNT    B*tree (search term dictionary) control file
xxxxxx.N01    B*tree Nodes (for terms up to 10 characters long)
xxxxxx.L01    B*tree Leafs (for terms up to 10 characters long)
xxxxxx.N02    B*tree Nodes (for terms longer than 10 characters)
xxxxxx.L02    B*tree Leafs (for terms longer than 10 characters)
xxxxxx.IFP    Inverted file postings
xxxxxx.ANY*   ANY file
xxxxxx.STW*   Stopword file used during inverted file generation
xxxxxx.LN1*   Unsorted Link file (short terms)
xxxxxx.LN2*   Unsorted Link file (long terms)
xxxxxx.LK1*   Sorted Link file (short terms)
xxxxxx.LK2*   Sorted Link file (long terms)
xxxxxx.BKP    Master file backup
xxxxxx.XHF    Hit file index
xxxxxx.HIT    Hit file
xxxxxx.SRT*   Sort conversion table (see "Uppercase conversion table (ISISUC.TAB)" on page 227)
yyyyyy.FST*   Field Select tables used for sorting
yyyyyy.PFT*   Additional display formats
yyyyyy.FMT*   Additional data entry worksheets
yyyyyy.STW*   Additional stopword files
yyyyyy.SAV    Save files created during retrieval
The name of user files is fully under user control. However, in order
to avoid possible name conflicts it is advisable to establish some
standard conventions to be followed by all CDS/ISIS users at a given
site, such as for example to define yyyyyy as follows:
xxxyyy
where:
xxx   is a data base identifier (which could be the first three letters of the data base name if no two data base names are allowed to begin with the same three letters)
yyy   is a user chosen name.
The Master record is a variable length record consisting of three sections: a fixed length leader; a directory; and the variable length data fields.
The leader consists of the following 7 integers (fields marked with * are 31-bit signed integers):
MFN*     Master file number
MFRL     Record length (always an even number)
MFBWB*   Backward pointer - Block number
MFBWP    Backward pointer - Offset
BASE     Offset to variable fields (this is the combined length of the Leader and Directory part of the record, in bytes)
NVF      Number of fields in the record (i.e. number of directory entries)
STATUS   Logical deletion indicator (0=record active; 1=record marked for deletion)
MFBWB and MFBWP are initially set to 0 when the record is
created. They are subsequently updated each time the record itself is
updated (see below).
The directory is a table indicating the record contents. There is one directory entry for each field present in the record (i.e. the directory has exactly NVF entries). Each directory entry consists of 3 integers:
TAG   Field Tag
POS   Offset to first character position of field in the variable field section (the first field has POS=0)
LEN   Field length in bytes
The total directory length in bytes is therefore 6*NVF; the BASE field
in the leader is always: 18+6*NVF.
This section contains the data fields (in the order indicated by the directory). Data fields are placed one after the other, with no separating characters.
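To make the layout concrete, here is a sketch of decoding one Master record in Perl. It assumes $record already holds the raw bytes of a complete record and that the integers are little-endian (PC byte order); both are assumptions made for illustration only, since the module itself handles block boundaries and portable unpacking:

  # leader: MFN (4 bytes), MFRL, MFBWB (4 bytes), MFBWP, BASE, NVF, STATUS
  my ($mfn, $mfrl, $mfbwb, $mfbwp, $base, $nvf, $status)
      = unpack("V v V v v v v", $record);                  # 18-byte leader

  # directory: NVF entries of (TAG, POS, LEN), 6 bytes each
  my @dir = unpack("v*", substr($record, 18, 6 * $nvf));
  while (my ($tag, $pos, $len) = splice(@dir, 0, 3)) {
      my $value = substr($record, $base + $pos, $len);     # data field
      print "$tag\t$value\n";
  }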
The first record in the Master file is a control record which the
system maintains automatically. This is never accessible to the ISIS
user. Its contents are as follows (fields marked with * are 31-bit
signed integers):
CTLMFN*   always 0
NXTMFN*   MFN to be assigned to the next record created in the data base
NXTMFB*   Last block number allocated to the Master file (first block is 1)
NXTMFP    Offset to next available position in last block
MFTYPE    always 0 for user data base file (1 for system message files)
RECCNT*
MFCXX1*
MFCXX2*
MFCXX3*
(the last four fields are used for statistics during backup/restore).
The Master file records are stored consecutively, one after the other,
each record occupying exactly MFRL bytes. The file is stored as
physical blocks of 512 bytes. A record may begin at any word boundary
between 0-498 (no record begins between 500-510) and may span over two
or more blocks.
As the Master file is created and/or updated, the system maintains an
index indicating the position of each record. The index is stored in
the Crossreference file (.XRF)
The XRF file is organized as a table of pointers to the Master file.
The first pointer corresponds to MFN 1, the second to MFN 2, etc.
Each pointer consists of two fields:
XRFMFB   (21 bits) Block number of Master file block containing the record
XRFMFP   (11 bits) Offset in block of first character position of Master record (first block position is 0)
which are stored in a 31-bit signed integer (4 bytes) as follows:
pointer = XRFMFB * 2048 + XRFMFP
(giving therefore a maximum Master file size of 500 Megabytes).
Each block of the XRF file is 512 bytes and contains 127 pointers. The
first field in each block (XRFPOS) is a 31-bit signed integer whose
absolute value is the XRF block number. A negative XRFPOS indicates
the last block.
Deleted records are indicated as follows:
XRFMFB < 0 and XRFMFP > 0    logically deleted record (in this case ABS(XRFMFB) is the correct block pointer and XRFMFP is the offset of the record, which can therefore still be retrieved)
XRFMFB = -1 and XRFMFP = 0   physically deleted record
XRFMFB = 0 and XRFMFP = 0    inexistent record (all records beyond the highest MFN assigned in the data base)
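As an illustration, a sketch of decoding a single XRF pointer into these components (variable names are made up; little-endian byte order is assumed, and the 512/1024 flag bits described below are ignored here):

  use POSIX qw(floor);

  my $pointer = unpack("V", $xrf_bytes);      # 4 raw bytes from an .XRF block
  $pointer -= 2**32 if $pointer >= 2**31;     # reinterpret as signed

  my $xrfmfb = floor($pointer / 2048);        # block number (negative marks deletion)
  my $xrfmfp = $pointer % 2048;               # offset within the block

  if    ($xrfmfb == -1 && $xrfmfp == 0) { print "physically deleted record\n" }
  elsif ($xrfmfb ==  0 && $xrfmfp == 0) { print "inexistent record\n" }
  elsif ($xrfmfb  <  0) { printf "logically deleted record at block %d, offset %d\n", abs($xrfmfb), $xrfmfp }
  else                  { printf "record at block %d, offset %d\n", $xrfmfb, $xrfmfp }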
New records are always added at the end of the Master file, at the
position indicated by the fields NXTMFB/NXTMFP in the Master file
control record. The MFN to be assigned is also obtained from the field
NXTMFN in the control record.
After adding the record, NXTMFN is increased by 1 and NXTMFB/NXTMFP
are updated to point to the next available position. In addition a new
pointer is created in the XRF file and the XRFMFP field corresponding
to the record is increased by 1024 to indicate that this is a new
record to be inverted (after the inversion of the record 1024 is
subtracted from XRFMFP).
Whenever you update a record (i.e., you call it in data entry and exit with option X from the editor) the system writes the record back to the Master file. Where it is written depends on the status of the record when it was initially read.
The first case is indicated by the following:
On XRF XRFMFP < 512 and
On MST MFBWB = 0 and MFBWP = 0
In this case, the record is always rewritten at the end of the Master
file (as if it were a new record) as indicated by NXTMFB/NXTMFP in the
control record. In the new version of the record MFBWB/MFBWP are set to
point to the old version of the record, while in the XRF file the
pointer points to the new version. In addition 512 is added to XRFMFP
to indicate that an inverted file update is pending. When the inverted
file is updated, the old version of the record is used to determine the
postings to be deleted and the new version is used to add the new
postings. After the update of the Inverted file, 512 is subtracted from
XRFMFP, and MFBWB/MFBWP are reset to 0.
The second case is indicated by the following:
On XRF XRFMFP > 512 and
On MST MFBWB > 0
In this case MFBWB/MFBWP point to the version of the record which is
currently reflected in the Inverted file. If possible, i.e. if the
record length was not increased, the record is written back at its
original location, otherwise it is written at the end of the file. In
both cases, MFBWB/MFBWP are not changed.
Record deletion is treated as an update, with the following additional markings:
On XRF XRFMFB is negative
On MST STATUS is set to 1
As indicated above, as Master file records are updated the MST file
grows in size and there will be lost space in the file which cannot be
used. The reorganization facilities allow this space to be reclaimed by
recompacting the file.
During the backup phase a Master file backup file is created (.BKP).
The structure and format of this file is the same as the Master file
(.MST), except that a Crossreference file is not required as all the
records are adjacent. Records marked for deletion are not backed up.
Because only the latest copy of each record is backed up, the system
does not allow you to perform a backup whenever an Inverted file update
is pending for one or more records.
During the restore phase the backup file is read sequentially and the
program recreates the MST and XRF files. At this point all records which
were marked for logical deletion (before the backup) are now marked as
physically deleted (by setting XRFMFB = -1 and XRFMFP = 0).
Deleted records are detected by checking holes in the MFN numbering.
The CDS/ISIS Inverted file consists of six physical files, five of
which contain the dictionary of searchable terms (organized as a
B*tree) and the sixth contains the list of postings associated with
each term. In order to optimize disk storage, two separate B*trees are
maintained, one for terms of up to 10 characters (stored in files
.N01/.L01) and one for terms longer than 10 characters, up to a maximum
of 30 characters (stored in files .N02/.L02). The file CNT contains
control fields for both B*trees. In each B*tree the file .N0x contains
the nodes of the tree and the .L0x file contains the leafs. The leaf
records point to the postings file .IFP.
The relationship between the various files is schematically represented in Figure 67.
The physical linkage between these six files is established through
pointers, where a pointer represents the relative address of the record being
pointed to. A relative address is the ordinal record number of a record
in a given file (i.e. the first record is record number 1, the second
is record number 2, etc.). The .CNT file points to the .N0x file,
.N0x points to .L0x, and .L0x points to .IFP. Because the
.IFP is a packed file, the pointer from .L0x to .IFP has two
components: the block number and the offset within the block, each expressed
as an integer.
.CNT file

This file contains two 26-byte fixed-length records (one for each B*tree), each containing 10 integers as follows (fields marked with * are 31-bit signed integers):
IDTYPE     B*tree type (1 for .N01/.L01, 2 for .N02/.L02)
ORDN       Nodes order (each .N0x record contains at most 2*ORDN keys)
ORDF       Leafs order (each .L0x record contains at most 2*ORDF keys)
N          Number of memory buffers allocated for nodes
K          Number of buffers allocated to 1st level index (K < N)
LIV        Current number of index levels
POSRX*     Pointer to Root record in .N0x
NMAXPOS*   Next available position in .N0x file
FMAXPOS*   Next available position in .L0x file
ABNORMAL   Formal B*tree normality indicator (0 if B*tree is abnormal, 1 if B*tree is normal). A B*tree is abnormal if the nodes file .N0x contains only the Root.
ORDN, ORDF, N and K are fixed for a given generated system.
Currently these values are set as follows:
ORDN = 5; ORDF = 5; N = 15; K = 5 for both B*trees
+--------------+
| Root address |
+-------|------+
| .CNT file
| -------------
| .N0x file
+-----------V--------+
| Key1 Key2 ... Keyn | Root
+---|-------------|--+
| |
+-----+ +------+
| |
+----------V----------+ +---------V----------+ 1st level
| Key1 Key2 ... Keyn | ... | Key1 Key2 ... Keyn | index
+--|------------------+ +-----------------|--+
| :
: +-------+
| |
+--V------------------+ +---------V----------+ last level
| Key1 Key2 ... Keyn | ... | Key1 Key2 ... Keyn | index
+---------|-----------+ +---------|----------+
| |
| | -------------
| | .L0x file
+---------V-----------+ +---------V----------+
| Key1 Key2 ... Keyn | ... | Key1 Key2 ... Keyn |
+--|------------------+ +--------------------+
|
| -------------
| .IFP file
+--V----------------------------------+
| P1 P2 P3 ..................... Pn |
+-------------------------------------+
Figure 67: Inverted file structure
The other values are set as required when the B*trees are generated.
.N0x files

These files contain the indexes of the dictionary of searchable terms
(.N01 for terms shorter than 11 characters and .N02 for terms longer
than 10 characters). The .N0x file records have the following format
(fields marked with * are 31-bit signed integers):
POS*   an integer indicating the relative record number (1 for the first record, 2 for the second record, etc.)
OCK    an integer indicating the number of active keys in the record (1 <= OCK <= 2*ORDN)
IT     an integer indicating the type of B*tree (1 for .N01, 2 for .N02)
IDX    an array of ORDN entries (OCK of which are active), each having the following format:

       KEY    a fixed length character string of length LEx (LE1 = 10, LE2 = 30)
       PUNT   a pointer to the .N0x record (if PUNT > 0) or .L0x record (if PUNT < 0) whose IDX(1).KEY = KEY. PUNT = 0 indicates an inactive entry. A positive PUNT indicates a branch to a hierarchically lower level index. The lowest level index (PUNT < 0) points to the leafs in the .L0x file.
.L0x files

These files contain the full dictionary of searchable terms (.L01 for
terms shorter than 11 characters and .L02 for terms longer than 10
characters). The .L0x file records have the following format (fields
marked with * are 31-bit signed integers):
POS*   an integer indicating the relative record number (1 for the first record, 2 for the second record, etc.)
OCK    an integer indicating the number of active keys in the record (1 < OCK <= 2*ORDF)
IT     an integer indicating the type of B*tree (1 for .N01, 2 for .N02)
PS*    the immediate successor of IDX[OCK].KEY in this record (this is used to speed up sequential access to the file)
IDX    an array of ORDN entries (OCK of which are active), each consisting of a key and a pointer (block number and offset) to that key's list of postings in the .IFP file (the pointer format is described below)
.IFP file

This file contains the list of postings for each dictionary term. Each list of postings has the format indicated below. The file is structured in blocks of 512 characters, where (for an initially loaded and compacted file) the lists of postings for each term are adjacent, except as noted below.
The general format of each block is:
IFPBLK   a 31-bit signed integer indicating the Block number of this block (blocks are numbered from 1)
IFPREC   an array of 127 31-bit signed integers
IFPREC[1] and IFPREC[2] of the first block are a pointer to the
next available position in the .IFP file.
Pointers from .L0x to .IFP and pointers within .IFP consist of two
31-bit signed integers: the first integer is a block number, and the
second integer is a word offset in IFPREC (e.g. the offset to the
first word in IFPREC is 0). The list of postings associated with the
first search term will therefore start at 1/0.
Each list of postings consists of a header (5 double-words) followed by the actual list of postings (8 bytes for each posting). The header has the following format (each field is a 31-bit signed integer):
IFPNXTB*   Pointer to next segment (Block number)
IFPNXTP*   Pointer to next segment (offset)
IFPTOTP*   Total number of postings (accurate only in first segment)
IFPSEGP*   Number of postings in this segment (IFPSEGP <= IFPTOTP)
IFPSEGC*   Segment capacity (i.e. number of postings which can be stored in this segment)
Each posting is a 64-bit string partitioned as follows:
PMFN(24 bits) Master file number
PTAG(16 bits) Field identifier (assigned from the FST)
POCC(8 bits) Occurrence number
PCNT(16 bits) Term sequence number in field
Each field is stored in a strict left-to-right sequence with leading zeros added if necessary to adjust the corresponding bit string to the right (this allows comparisons of two postings as character strings).
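For illustration, here is a sketch of splitting one 8-byte posting into these fields (it assumes $posting holds the raw bytes in the stored, most-significant-byte-first order implied by the string-comparison property above):

  my ($hi, $lo) = unpack("N N", $posting);         # two 32-bit big-endian halves
  my $pmfn = $hi >> 8;                             # 24 bits: Master file number
  my $ptag = (($hi & 0xFF) << 8) | ($lo >> 24);    # 16 bits: field identifier
  my $pocc = ($lo >> 16) & 0xFF;                   #  8 bits: occurrence number
  my $pcnt = $lo & 0xFFFF;                         # 16 bits: term sequence number in field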
The list of postings is stored in ascending PMFN/PTAG/POCC/PCNT
sequence. When the inverted file is loaded sequentially (e.g. after a
full inverted file generation with ISISINV), each list consists of one
or more adjacent segments. If IFPTOT <= 32768 then:
IFPNXTB/IFPNXTP = 0/0 and IFPTOT = IFPSEGP = IFPSEGC.
As updates are performed, additional segments may be created whenever
new postings must be added. In this case a new segment with capacity
IFPTOTP is created and linked to other segments (through the pointer
IFPNXTB/IFPNXTP) in such a way that the sequence
PMFN/PTAG/POCC/PCNT is maintained. Whenever such a split occurs
the postings of the segment where the new posting should have been inserted
are equally distributed between this segment and the newly created segment.
New segments are always written at the end of the file (the position of which
is maintained in IFPREC[1]/IFPREC[2] of the first .IFP block).
For example, assume that a new posting Px has to be inserted between P2
and P3 in the following list:
+----------------------------+
| 0 0 5 5 5 | P1 P2 P3 P4 P5 |
+----------------------------+
after the split (and assuming that the next available position in .IFP
is 3/4) the list of postings will consist of the following two segments:
+----------------------------+
| 3 4 5 3 5 | P1 P2 Px -- -- |
+--|-------------------------+
|
+--V-------------------------+
| 0 0 5 3 5 | P3 P4 P5 -- -- |
+----------------------------+
In this situation, no new segment will be created until either segment becomes full again.
As mentioned above, the posting lists are normally stored one after the
other. However, in order to facilitate access to the .IFP file the
segments are stored in such a way that:
the header and the first posting in each list (28 bytes) are never split between two blocks.
a posting is never split between two blocks; if there is not enough room in the current block the whole posting is stored in the next block.
UNESCO has developed and owns the intellectual property of the CDS/ISIS software (in whole or in part, including all files and documentation, from here on referred to as CDS/ISIS) for the storage and retrieval of information.
For complete text of licence visit http://www.unesco.org/isis/files/winisislicense.html.
2007-05-18 21:16:52 dpavlin r72
/trunk/lib/Biblio/Isis.pm: push version to 0.24
2007-05-18 21:16:43 dpavlin r71
/trunk/scripts/dump_isisdb.pl: added -v to dump script which will display erased records and empty subfields
2007-05-18 20:26:01 dpavlin r70
/trunk/lib/Biblio/Isis.pm: added ignore_empty_subfields [0.24_1]
2006-10-29 15:37:43 dpavlin r69
/trunk/lib/Biblio/Isis.pm, /trunk/t/2_isis.t: fixed bug with documented, but unimplemented new( join_subfields_with => 'foo' )
2006-08-26 23:09:20 dpavlin r68
/trunk/scripts/dump_isisdb.pl: changed options to -o offset and -l limit
2006-08-25 16:35:47 dpavlin r67
/trunk/lib/Biblio/Isis.pm: better eliminination of empty subfields
2006-08-25 10:20:58 dpavlin r66
/trunk/lib/Biblio/Isis.pm, /trunk/t/2_isis.t: added hash_filter to to_hash [0.23]
2006-07-13 13:34:30 dpavlin r65
/trunk/lib/Biblio/Isis.pm: documented that hash filter gets also a field number [0.22]
2006-07-13 13:27:27 dpavlin r64
/trunk/t/2_isis.t, /trunk/lib/Biblio/Isis.pm: hash_filter now accepts whole line from record and field number. Removed oddly placed implementation of regexpes (moved to WebPAC via hash_filter as it should...)
2006-07-13 09:13:25 dpavlin r63
/trunk/t/2_isis.t: added global replacements (which are not bound to subfield existence)
2006-07-10 12:01:04 dpavlin r62
/trunk/lib/Biblio/Isis.pm, /trunk/t/2_isis.t: added regexpes to new as option
2006-07-09 21:36:33 dpavlin r61
/trunk/lib/Biblio/Isis.pm, /trunk/t/2_isis.t: initial implementation of split_on_delimiters functionality needed for Webpac. It adds just regexpes hash to to_hash, but I'm still not quite satisfied with it.
2006-07-09 13:20:06 dpavlin r60
/trunk/MANIFEST, /trunk/Makefile.PL, /trunk/MANIFEST.SKIP: cpan target and tweaking of distribution files
2006-07-09 12:22:09 dpavlin r59
/trunk/lib/Biblio/Isis.pm: added link to Biblio::Isis::Manual
2006-07-09 12:18:44 dpavlin r58
/trunk/lib/Biblio/Isis.pm, /trunk/t/2_isis.t: test and fix join_subfields_with
2006-07-09 12:12:57 dpavlin r57
/trunk/lib/Biblio/Isis.pm, /trunk/t/2_isis.t: added join_subfields_with and include_subfields [0.21]
2006-07-08 16:03:52 dpavlin r56
/trunk/t/2_isis.t, /trunk/lib/Biblio/Isis.pm: to_hash now accept parametars
2006-07-08 14:21:49 dpavlin r55
/trunk/t/2_isis.t: add tests for to_hash and to_ascii
2006-07-07 23:45:12 dpavlin r54
/trunk/t/2_isis.t, /trunk/lib/Biblio/Isis.pm: added $isis->mfn, some documentation about version compatibility and few FIXME markers
2006-07-07 23:43:34 dpavlin r53
/trunk/Makefile.PL: better html creation
2006-07-07 22:33:06 dpavlin r52
/trunk/t/001_load.t, /trunk/t/1_load.t, /trunk/t/9_pod-coverage.t,/trunk/t/998_pod-coverage.t,/trunk/t/002_isis.t, /trunk/t/2_isis.t, /trunk/t/9_pod.t,/trunk/t/999_pod.t: renamed tests
2006-07-07 22:29:49 dpavlin r51
/trunk/t/002_isis.t: make test less chatty without debug
2006-07-07 21:11:01 dpavlin r50
/trunk/lib/Biblio/Isis.pm, /trunk/scripts/dump_isisdb.pl: support for repeatable subfields, version bump to 0.20
THIS MIGHT BE INCOMPATIBILE CHANGE for old programs if they always expect to get scalar for values in hash generated by to_hash.
2006-07-07 21:07:44 dpavlin r49
/trunk/t/002_isis.t: sync test with new data
2006-07-07 20:48:57 dpavlin r48
/trunk/data/winisis/BIBL.XRF, /trunk/data/winisis/BIBL.mst: fix difference at MFN 5, 225^a
2006-07-07 10:33:33 dpavlin r47
/trunk/t/002_isis.t: sane debug output
2006-07-07 10:25:02 dpavlin r46
/trunk/t/002_isis.t: dump rec in debug also
2006-07-06 20:31:46 dpavlin r45
/trunk/t/002_isis.t, /trunk/lib/Biblio/Isis.pm: better logging, use Data::Dump if available [0.14]
2006-07-06 11:02:37 dpavlin r44
/trunk/lib/Biblio/Isis.pm: skip empty results of hash_filter
2006-06-29 23:20:14 dpavlin r43
/trunk/scripts/dump_isisdb.pl: actually use -n argument for maximum records to dump
2005-12-09 14:50:52 dpavlin r42
/trunk/scripts/dump_isisdb.pl: added -d path and -n flags
2005-03-12 21:05:29 dpavlin r41
/trunk/lib/Biblio/Isis.pm: better support for ISIS files with null pointers (it will warn and not die)
2005-02-01 15:49:42 dpavlin r40
/trunk/Makefile.PL: I use Test::More and not Test::Simple
2005-01-27 22:01:17 dpavlin r39
/trunk/lib/Biblio/Isis.pm: carp and not croak if MST or XRF file isn't found (calling program will receive undef from new and warning will be issued).
2005-01-12 19:28:41 dpavlin r38
/trunk/t/001_load.t, /trunk/Makefile.PL, /trunk/t/002_isis.t: use File::Spec in attempt to support MacPerl
2005-01-07 20:57:56 dpavlin r37
/trunk/MANIFEST.SKIP, /trunk/lib/Biblio/Isis/Manual.pod, /trunk/lib/Biblio/Isis, /trunk/t/999_pod.t,
/trunk/Isis.pm, /trunk/MANIFEST, /trunk/lib, /trunk/lib/Biblio, /trunk/Makefile.PL, /trunk/lib/Biblio/Isis.pm: re-organize directories, add CDS/ISIS manual -- part about file structure
2005-01-06 20:48:07 dpavlin r36
/trunk/t/002_isis.t, /trunk/scripts/bench.pl, /trunk/t/999_pod.t, /trunk/Isis.pm, /trunk/scripts/dump_isisdb.pl, /trunk/t/001_load.t, /trunk/MANIFEST, /trunk/Makefile.PL,
/trunk/IsisDB.pm: renamed module to Biblio::Isis
2005-01-06 16:27:07 dpavlin r35
/trunk/IsisDB.pm: moved *_cnt function to end of module (so that documetation ends up at end)
2005-01-06 00:40:07 dpavlin r34
/trunk/IsisDB.pm: croak more, carp less (die on anything which is unrecoverable)
2005-01-05 21:23:04 dpavlin r33
/trunk/t/002_isis.t, /trunk/IsisDB.pm: - make filehandles locally scoped
- changed unpack to portable big-endian (so that it works on little-endian machines; tested with PearPC and OpenDarwin)
- added carps where missing
- added binmode when opening files
- any argument to 002_isis.t will show debugging output
2005-01-05 15:46:26 dpavlin r32
/trunk/MANIFEST, /trunk/IsisDB.pm, /trunk/t/998_pod-coverage.t, /trunk/t/002_isis.t, /trunk/scripts/bench.pl, /trunk/scripts/dump_isisdb.pl: new api version
- added count method (instead of calling maxmfn directly in object)
- added POD coverage test
- moved unpack_cnt to be separate method and document it
2005-01-02 22:14:54 dpavlin r31
/trunk/MANIFEST: fixed manifest
2005-01-02 02:41:30 dpavlin r30
/trunk/scripts/cmp.sh: fix
2005-01-01 22:39:27 dpavlin r29
/trunk/scripts/dump_isis.pl, /trunk/scripts/dump_isisdb.pl: renamed example script
2005-01-01 22:29:49 dpavlin r28
/trunk/t/002_isis.t: test read_cnt
2005-01-01 22:29:35 dpavlin r27
/trunk/IsisDB.pm: documentation improvement
2004-12-31 07:16:02 dpavlin r26
/trunk/IsisDB.pm: partial fix for physically deleted records, but logic could benefit from a bit more work since it's not totally complient with ISIS documentation.
2004-12-31 05:43:20 dpavlin r25
/trunk/data/winisis/BIBL.XRF, /trunk/IsisDB.pm, /trunk/data/winisis/BIBL.mst, /trunk/t/002_isis.t: major improvments and new version:
- implement logically deleted records (really!)
- re-ordered values tests using cmp_ok so that reporting is correct,
- return record in fetch even if it's in memory (bugfix)
- removed some obsolete code
2004-12-31 04:24:57 dpavlin r24
/trunk/data/winisis/BIBL.FDT: add missing FDT file
2004-12-31 04:21:21 dpavlin r23
/trunk/IsisDB.pm: important fix: identifiers should be first two characters and than ^, otherwise, leave them alone.
2004-12-31 01:06:21 dpavlin r22
/trunk/t/002_isis.t: test to_ascii
2004-12-31 00:46:33 dpavlin r21
/trunk/t/002_isis.t: fetch tests
2004-12-30 23:17:00 dpavlin r20
/trunk/data/isismarc/isismarc/fdt21.mst, /trunk/data/isismarc, /trunk/data/winisis/BIBL.IFP, /trunk/data/isismarc/isismarc/fmt21.xrf, /trunk/data/isismarc/isismarc/pft21.fst, /trunk/data/isismarc/isismarc/pft21.xrf, /trunk/data/isismarc/isismarc/fmt21.mst, /trunk/data/isismarc/BIBL.XRF, /trunk/data/winisis/BIBL.CNT, /trunk/data/isismarc/isismarc/pft21.mst, /trunk/data/isismarc/isismarc, /trunk/data/isismarc/BIBL.mst, /trunk/data/winisis, /trunk/data/winisis/BIBL.l01, /trunk/data/winisis/BIBL.l02, /trunk/data/winisis/BIBL.N01, /trunk/data/isismarc/isismarc/isismarc2.cip, /trunk/data/winisis/BIBL.XRF, /trunk/data/winisis/BIBL.N02, /trunk/data/isismarc/BIBL.IFP, /trunk/data/winisis/BIBL.mst, /trunk/data, /trunk/data/isismarc/isismarc/fdt21.xrf: added test data
2004-12-30 23:16:20 dpavlin r19
/trunk/t/001_load.t, /trunk/IsisDB.pm, /trunk/t/002_isis.t: added real test (beginning of...) and changed some confesses to croak
2004-12-30 22:40:53 dpavlin r18
/trunk/Makefile.PL, /trunk/IsisDB.pm: Deduce file names and extensions using glob case insesitive. This fixes potential problem with extension names. Extracted code to read .CNT file into read_cnt function.
2004-12-30 19:45:14 dpavlin r17
/trunk/scripts/cmp.sh: small script to compare output from IsisDB with OpenISIS
2004-12-30 17:16:34 dpavlin r16
/trunk/IsisDB.pm: clean up offset calculation (now works with ISIS databases from isis.dll), don't re-fetch MFN if in memory allready, dump debugging messages to STDERR
2004-12-29 22:46:40 dpavlin r15
/trunk/MANIFEST, /trunk/IsisDB.pm, /trunk/scripts/dump_isis.pl, /trunk/scripts/dump_openisis.pl: mostly documentation improvements, but also nicer output and field names output (using .FDT file) in to_ascii if read_fdt is specified
2004-12-29 20:11:34 dpavlin r14
/trunk/Makefile.PL, /trunk/scripts/bench.pl: benchmark hash creation for various implementations
2004-12-29 20:10:59 dpavlin r13
/trunk/scripts/dump_openisis.pl, /trunk/scripts/dump_isis.pl: added debug output which displays raw structures
2004-12-29 20:10:11 dpavlin r12
/trunk/IsisDB.pm: added to_hash method and hash_filter coderef to new constructor to filter data prior to unpacking ISIS data into hash.
2004-12-29 17:03:52 dpavlin r11
/trunk/Makefile.PL, /trunk/IsisDB.pm: documentation and dependency improvements, inline Read32 to get some more performance.
2004-12-29 16:04:07 dpavlin r10
/trunk/IsisDB.pm: skip fields with length 0, OpenIsis produce binary junk in this case.
2004-12-29 16:01:41 dpavlin r9
/trunk/scripts/dump_isis.pl, /trunk/IsisDB.pm: logically deleted records are by default skipped, but can be included using include_deleted option to new
2004-12-29 15:17:59 dpavlin r8
/trunk/IsisDB.pm: another speedup (7845.71/s)
2004-12-29 15:10:34 dpavlin r7
/trunk/IsisDB.pm, /trunk/scripts/dump_isis.pl, /trunk/scripts/bench.pl: added benchmarking script, some speedup (7029.54/s vs 5829.19/s), removed left-overs from php porting (dictionaries are not supported by this module), make dump_isis.pl arguments same as dump_openisis.pl, renamed GetMFN to fetch
2004-12-28 04:07:03 dpavlin r6
/trunk/Makefile.PL,
/trunk/Changes: minor changes and cleanup, create Changes from Subversion repository log
2004-12-28 04:06:29 dpavlin r5
/trunk/t/999_pod.t: test pod
2004-12-28 04:06:04 dpavlin r4
/trunk/scripts/dump_isis.pl: print number of rows
2004-12-28 01:48:44 dpavlin r3
/trunk/IsisDB.pm: remove debugging
2004-12-28 01:41:45 dpavlin r2
/trunk/t/001_load.t, /trunk/MANIFEST, /trunk/IsisDB.pm, /trunk/scripts/dump_isis.pl, /trunk/scripts/dump_openisis.pl: first working version:
- add support for repeatable fields (so all hash values becomed arrays, even with single element)
- scripts to dump CDS/ISIS database using this module and OpenIsis
- to_ascii method which dumps ascii output of record
2004-12-28 00:43:04 dpavlin r1
/trunk/LICENSE, /trunk/t, /trunk/t/001_load.t, /trunk/MANIFEST, /trunk/Makefile.PL, /trunk/scripts, /trunk/Changes, /trunk/IsisDB.pm, /trunk/README, /trunk: Import of old code back from february to actually make it work.