==== AMB ARCHIVE FORMAT ====

last update: 2020-12-14

The latest version of this file can be found on the AMB project's homepage:

An AMB file (Ancient Machines Book) is an extremely lightweight file format
meant to store any kind of hypertext documentation that may be comfortably
viewed even on the most ancient PCs: technical manuals, books, etc. Think of
it as a retro equivalent of a *.CHM help file. The AMB format is designed to
allow for some limited formatting, support internal links and require very
little processing power to read, so a reader may be run even on the oldest
IBM PC. The format also strives for simplicity of implementation.

The AMB file is a container - one could say it is a very simplistic archive
format. It starts with a 4-byte format signature (magic value) "AMB1". Then
comes a 2-byte number that tells how many files are present in the
container, followed by the list of all files: each file is described by a
file entry. All values are little-endian.

offset
     0   format signature: "AMB1"
     4   files count (16-bit value)
     6   FILE ENTRY #1
         FILE ENTRY #2
         FILE ENTRY #3
         ....
         DATA

Each file entry is a 20-byte structure:

offset
     0   filename, 12 characters, zero-padded ("FILE.EXT\0\0\0\0")
    12   offset where this file starts (32 bits)
    16   file length, in bytes (16 bits)
    18   BSD sum (16-bit) of the file

(A minimal parsing sketch of this layout is shown further below, after the
CODEPAGE ENCODING section.)

The AMB archive is expected to contain a set of AMA (Ancient Machines
Article) files, and optionally a title file, an index dictionary and a
codepage map. An AMB archive must contain at least one AMA file named
"index.ama" - this is the first file that an AMB reader will try loading.

Note: Names of files contained in an AMB archive are to be processed in a
case-insensitive way and must be composed exclusively of 7-bit characters.


=== DOCUMENT TITLE ===========================================================

The AMB title is a string that may be displayed as the document's main
title. To set such a title, the AMB archive has to contain a file named
simply 'title' that contains the text. The title string should not be longer
than 64 characters; anything longer might be truncated by the reader.


=== AMA FORMAT ===============================================================

The AMA format is a text-based file format. For guaranteed interoperability
with old machines, its maximum allowed size is 65535 bytes (i.e. 2^16 - 1).
Larger content must be segmented into a set of two or more AMA articles.

An AMB reader must display content with a 78-character width, hence an AMA
article must not contain any line longer than 78 displayable characters.
Lines longer than this limit will be truncated by the client reader.

AMA articles may contain control codes. A control code is a pair of
characters, where the first is a percent (%) character. Possible control
codes:

  %t   normal text follows (default state)
  %h   heading follows
  %l   link follows (filename ended by a ':', followed by a description)
  %!   notice/warning follows
  %b   boring text follows (usually displayed grey on grey)
  %%   display a percent character

It is important to note that the current text mode is reset to %t at the end
of every line, hence there is no need to prefix a line of text with %t.

Line endings may be either LF or CR/LF. The former is recommended, as it is
more compact.

TAB characters (ASCII decimal value 9) are NOT allowed in AMA files.

Whenever an external URL appears in an AMA file (for example a link to a web
page, an FTP resource or a gopher hole) it is encouraged to be enclosed
between <> characters. Example: .

This is only a typesetting recommendation based on RFC 3986; it is not part
of the AMA specification. Following it would, however, make it much easier
for modern AMB readers to detect such links automatically and make them
clickable.


=== CODEPAGE ENCODING ========================================================

Since ancient computers display text as 8-bit characters due to the design
of early video adapters, AMA files are expected to contain 8-bit text as
well. The exact codepage is unspecified by this format definition and
depends on the document's target.

To ease displaying of AMB books on modern (unicode-enabled) platforms, any
AMB file that contains non-7-bit characters SHOULD also contain a file named
"unicode.map". This file contains a sequence of 128 16-bit values, mapping
bytes of the range 128..255 to Unicode code point values. Such a file can be
readily output by the utf8tocp program.
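The container layout described at the top of this document can be
illustrated with a short C program that reads the header and lists the file
entries. This is only a sketch, not a reference implementation: the struct
and helper names are invented for this example, and it assumes that the
"BSD sum" field is the classic rotate-right-and-add 16-bit BSD checksum
computed over the file's bytes.

/* amblist.c - minimal sketch of an AMB container lister (names invented
 * for this example, not part of the specification) */
#include <stdio.h>
#include <string.h>

struct amb_entry {
  char name[13];          /* 12 characters + terminating NUL */
  unsigned long offset;   /* 32-bit offset where the file starts */
  unsigned int length;    /* 16-bit file length, in bytes */
  unsigned int bsdsum;    /* 16-bit BSD sum of the file */
};

/* read little-endian values byte by byte, so the sketch does not depend on
 * the host's endianness or on struct padding */
static unsigned int le16(const unsigned char *p) {
  return p[0] | (p[1] << 8);
}

static unsigned long le32(const unsigned char *p) {
  return (unsigned long)p[0] | ((unsigned long)p[1] << 8) |
         ((unsigned long)p[2] << 16) | ((unsigned long)p[3] << 24);
}

/* assumed: classic 16-bit BSD checksum (rotate right by one, add the byte) */
static unsigned int bsd_sum(const unsigned char *buf, unsigned int len) {
  unsigned int sum = 0, i;
  for (i = 0; i < len; i++) {
    sum = ((sum >> 1) | ((sum & 1) << 15)) + buf[i];
    sum &= 0xffff;
  }
  return sum;
}

int main(int argc, char **argv) {
  unsigned char hdr[6], rec[20];
  struct amb_entry e;
  unsigned int count, i;
  FILE *f;
  if (argc != 2) return 1;
  f = fopen(argv[1], "rb");
  if (f == NULL) return 1;
  /* 6-byte header: "AMB1" signature + 16-bit files count */
  if (fread(hdr, 1, 6, f) != 6 || memcmp(hdr, "AMB1", 4) != 0) {
    fclose(f);
    return 1;
  }
  count = le16(hdr + 4);
  /* 20-byte file entries follow the header */
  for (i = 0; i < count; i++) {
    if (fread(rec, 1, 20, f) != 20) break;
    memcpy(e.name, rec, 12);
    e.name[12] = 0;
    e.offset = le32(rec + 12);
    e.length = le16(rec + 16);
    e.bsdsum = le16(rec + 18);
    printf("%-12s off=%lu len=%u sum=0x%04X\n",
           e.name, e.offset, e.length, e.bsdsum);
  }
  fclose(f);
  return 0;
}

To verify an entry, a reader would seek to e.offset, read e.length bytes and
compare bsd_sum() of that buffer against e.bsdsum.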
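The per-line control codes of the AMA FORMAT section above can be handled
with a tiny state machine. The sketch below is an interpretation, not a
normative reader: render_ama_line and set_style are invented names, %l text
is not split into its filename and description parts, and a real reader
would switch video attributes instead of printing bracketed mode markers.

#include <stdio.h>

/* placeholder: a real reader would change the display attributes here */
static void set_style(char mode) {
  printf("[%c]", mode);
}

/* render one AMA line (without its LF / CR-LF terminator);
 * the text mode starts as %t and is implicitly reset at the end of the line */
static void render_ama_line(const char *line) {
  char mode = 't';
  set_style(mode);
  while (*line != 0) {
    if (line[0] == '%' && line[1] != 0) {
      if (line[1] == '%') {
        putchar('%');        /* %% prints a literal percent character */
      } else {
        mode = line[1];      /* %t %h %l %! %b switch the current text mode */
        set_style(mode);
      }
      line += 2;
    } else {
      putchar(*line);
      line++;
    }
  }
  putchar('\n');
}

int main(void) {
  render_ama_line("%hInstallation");
  render_ama_line("See %lSETUP.AMA:the setup article for details, 100%% offline.");
  return 0;
}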
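For the CODEPAGE ENCODING section above, a modern reader could translate
high bytes through "unicode.map" roughly as sketched below. Two assumptions
are made here: the 128 16-bit values are taken to be little-endian like the
rest of the container (the section above does not spell this out), and the
map is read from a standalone file for brevity rather than from inside the
AMB archive.

#include <stdio.h>

static unsigned int cp_to_unicode[128];

/* load the 128 16-bit mappings for bytes 128..255; returns 0 on success */
static int load_unicode_map(const char *fname) {
  unsigned char buf[256];
  int i;
  FILE *f = fopen(fname, "rb");
  if (f == NULL) return -1;
  if (fread(buf, 1, 256, f) != 256) {
    fclose(f);
    return -1;
  }
  fclose(f);
  for (i = 0; i < 128; i++) {
    cp_to_unicode[i] = buf[i * 2] | (buf[i * 2 + 1] << 8);  /* assumed LE */
  }
  return 0;
}

/* map one byte of AMA text to a Unicode code point */
static unsigned int byte_to_unicode(unsigned char c) {
  return (c < 128) ? c : cp_to_unicode[c - 128];
}

int main(void) {
  if (load_unicode_map("unicode.map") != 0) return 1;
  printf("byte 0x80 -> U+%04X\n", byte_to_unicode(0x80));
  return 0;
}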
=== INDEX DATA ===============================================================

On top of AMA files, the AMB archive may contain a file named DICT.IDX. This
file, if it exists, provides indexing metadata that allows the client to
perform fast and efficient full-text searches across the AMB book.

The index file contains a hash table: a series of 256 16-bit indexes, where
each index points to a region of the index structure that contains a list of
words (LoW). The index (0..255) itself is an 8-bit hash based on the length
of the word and its characters. The hash value is made of two nibbles: LC.
The high nibble (L) is the length of the word minus 2, while the low nibble
(C) is a simple checksum of all the word's characters XORed together. This
algorithm can be formalized as follows:

  ((wordlen - 2) << 4) | ((a & 15) XOR (b & 15) XOR (...))

For example, the word "Disk" would end up being indexed under value 0x25,
because:

  ((4 - 2) << 4) | ((D & 15) XOR (i & 15) XOR (s & 15) XOR (k & 15))

translates to:

  (2 << 4) | (4 XOR 9 XOR 3 XOR 11)

which leads to:

  32 | 5

resulting in:

  37 = 0x25

Under this index in the hash table we find the pointer to the corresponding
list of words. A pointer is a 16-bit file offset from the start of the index
structure.

Note that words shorter than 2 characters or longer than 17 characters
cannot be indexed. The presented algorithm also has the interesting
side-effect of hashing lower-case (a..z) and upper-case (A..Z) letters
identically.

An important limitation is the fact that the list of words (LoW) is
restricted by the 16-bit addressing offset, which means that all LoWs must
start at an offset within the first 64 KiB of the file.

Now that we know the offset at which our LoW starts, we can read the words.
First go to the offset and read a single 16-bit value: it contains the
number of words in the list. Then read the words one after another (note
that all words in the list have the same length, and you know this length
already). Words are always written in lower-case characters. Each word is
followed by a 1-byte value that tells how many files the word has been found
in. Then, that many 32-bit file identifiers follow.

index format:

 * List of words *
   xx     number of words in the list
   ?      word
   x      how many files the word is present in
   xxxx   file identifier 1
   xxxx   file identifier 2
   ...
   xxxx   file identifier n

 (other 255 lists of words follow)

 * hash table *
   xx     offset of the LoW for words that match hash 0x00
   xx     offset of the LoW for words that match hash 0x01
   ...
   xx     offset of the LoW for words that match hash 0xff


====================================================================== EOF ===
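The word hash of the INDEX DATA section above is small enough to show in
full. The sketch below reproduces the worked example and also demonstrates
the case-insensitivity side-effect: both "Disk" and "disk" hash to 0x25. The
function name is invented for this example.

#include <stdio.h>
#include <string.h>

/* two-nibble DICT.IDX word hash; returns -1 for words that cannot be
 * indexed (shorter than 2 or longer than 17 characters) */
static int amb_word_hash(const char *word) {
  size_t len = strlen(word);
  unsigned int csum = 0;
  size_t i;
  if (len < 2 || len > 17) return -1;
  for (i = 0; i < len; i++) {
    csum ^= (unsigned char)word[i] & 15;   /* XOR of the low nibbles */
  }
  return (int)(((len - 2) << 4) | csum);   /* high nibble: length - 2 */
}

int main(void) {
  printf("Disk -> 0x%02X\n", (unsigned int)amb_word_hash("Disk"));
  printf("disk -> 0x%02X\n", (unsigned int)amb_word_hash("disk"));
  return 0;
}

A searching reader would use this value to select one of the 256 16-bit
pointers in the hash table, then walk the list of words found at that
offset: a 16-bit word count, then the fixed-length words, each followed by a
1-byte file count and that many 32-bit file identifiers.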