Currently DokuWiki uses a very simple index format. Its disadvantage is that the complexity of a search query grows linearly with the size of the wiki: not necessarily in combined file sizes, but in theory it could be as bad as that, and it definitely grows linearly in the number of pages. An efficient search index could have search and update times that are logarithmic in the size of the wiki.
B+ trees are an I/O-efficient data structure that is used in modern file and database systems and that is relatively easy to understand.
To give you some idea of how B+ trees work, there is a short summary in the next paragraph. More can be found in the Wikipedia article or in this lecture (at the end of the second lecture; the third lecture explains even more advanced, but in fact still rather simple, techniques for versioned trees).
The basic idea is to use, instead of a binary tree, a tree where each node has many children. The data itself is only stored in the leaves of the tree. Typically a node has the size of a (disk) block. When new elements are inserted, nodes that have too many children are split (and if the parent then has too many children, it is split, too, etc.); likewise, when elements are deleted, nodes that have too few children are merged. That way a minimum number of children can be guaranteed (except for the root node) and all leaves stay on the same level, i.e. there is no need to rebalance the tree.
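To make the insert-and-split logic concrete, here is a minimal sketch (in Python rather than PHP, purely as illustration; `ORDER` and all names are invented, and deletion/merging is omitted):

```python
ORDER = 4  # max children per internal node / max entries per leaf (toy value)

class Node:
    def __init__(self, leaf=True):
        self.leaf = leaf
        self.keys = []      # sorted keys
        self.children = []  # child nodes (internal nodes only)
        self.values = []    # payloads (leaves only)

def search(node, key):
    """Descend from the root to the leaf that may hold `key`."""
    while not node.leaf:
        i = 0
        while i < len(node.keys) and key >= node.keys[i]:
            i += 1
        node = node.children[i]
    return node.values[node.keys.index(key)] if key in node.keys else None

def insert(root, key, value):
    """Insert; if the root splits, the tree grows one level at the top."""
    split = _insert(root, key, value)
    if split is not None:
        sep, right = split
        new_root = Node(leaf=False)
        new_root.keys = [sep]
        new_root.children = [root, right]
        return new_root
    return root

def _insert(node, key, value):
    if node.leaf:
        i = 0
        while i < len(node.keys) and node.keys[i] < key:
            i += 1
        node.keys.insert(i, key)
        node.values.insert(i, value)
        if len(node.keys) > ORDER:      # leaf overflow: split in half
            mid = len(node.keys) // 2
            right = Node(leaf=True)
            right.keys, node.keys = node.keys[mid:], node.keys[:mid]
            right.values, node.values = node.values[mid:], node.values[:mid]
            return right.keys[0], right  # separator is copied up, data stays in leaves
        return None
    # internal node: recurse into the correct child
    i = 0
    while i < len(node.keys) and key >= node.keys[i]:
        i += 1
    split = _insert(node.children[i], key, value)
    if split is not None:
        sep, right = split
        node.keys.insert(i, sep)
        node.children.insert(i + 1, right)
        if len(node.children) > ORDER:  # internal overflow: push middle key up
            mid = len(node.keys) // 2
            sep_up = node.keys[mid]
            right = Node(leaf=False)
            right.keys, node.keys = node.keys[mid + 1:], node.keys[:mid]
            right.children, node.children = node.children[mid + 1:], node.children[:mid + 1]
            return sep_up, right
    return None
```

Note the B+ tree detail in the leaf split: the separator key is *copied* into the parent while the entry itself stays in the leaf, whereas an internal split *moves* the middle key up.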
If you are looking for B+ tree implementations, note that most implementations you'll find are actually binary tree implementations, as “btree” is also used as an abbreviation for “binary tree”. One pure PHP implementation can be found here, but I can't imagine that its index format, which is based on a single file used in an append-only manner, scales really well.
My (Michael Hamann) idea is to simply store each node in its own file. This might not be as efficient as using one large block of data, but I can imagine that it will be efficient enough.
Some numbers: if you have 256 children per node, you can have 16777216 entries with just 3 levels (assuming that each node has the maximum number of children), but even with 64 children (which would be 256/4, i.e. the minimum number of children for a maximum of 256) just one additional level will give you the same number of entries. This means that for searching, inserting or deleting an item you need to read/write just 3 or 4 files.
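The arithmetic is easy to verify (this just restates the numbers from the paragraph above; note that classic B+ trees usually guarantee half the maximum rather than a quarter, but the conclusion is the same):

```python
# Fan-out arithmetic for the node sizes discussed above.
max_children = 256
min_children = max_children // 4  # the assumed minimum of 64 children

entries_3_levels_full = max_children ** 3  # every node completely full
entries_4_levels_min = min_children ** 4   # every node minimally full

print(entries_3_levels_full)  # 16777216
print(entries_4_levels_min)   # 16777216
```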
An idea for the file format would be to give each entry one line (that way PHP's file function could be used to get an array of all entries in a node) and to separate the key from the value (or, in internal nodes, from the id of the child) by a zero byte.
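For illustration (in Python; the keys, child ids and layout here are invented, not a fixed format), parsing such a line-based node is trivial:

```python
# Hypothetical node content: one entry per line, key and value/child id
# separated by a zero byte.
raw = b"syntax\x00node_1a\nwiki\x00node_2f\n"

entries = []
for line in raw.split(b"\n"):
    if not line:
        continue  # skip the trailing empty line
    key, value = line.split(b"\x00", 1)
    entries.append((key.decode(), value.decode()))

print(entries)  # [('syntax', 'node_1a'), ('wiki', 'node_2f')]
```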
In theory (and also in the implementation for Btrfs, as far as I know) it is assumed that keys have a fixed, rather small size. As this implementation wouldn't work on the binary level anyway, and the actual storage on disk is rather opaque, an interesting idea is to use the actual strings (like the search words) as keys instead of just using a hash. This has the advantage that sorted range queries become very easy. If the keys can contain newlines or zero bytes, they could be converted to hex, which would double their size but keep the sort order.
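A quick Python check (illustration only, with made-up keys) that hex encoding keeps the byte-wise sort order while removing the problematic characters:

```python
# Each byte maps to a fixed-width pair of hex digits in increasing ASCII
# order, so sorting the encoded keys gives the same order as sorting the
# raw keys -- and the result contains no newlines or zero bytes.
words = ["app", "apple", "b\nad", "zoo\x00"]  # some keys with "bad" characters

encoded = [w.encode().hex() for w in words]

assert [bytes.fromhex(e).decode() for e in sorted(encoded)] == sorted(words)
print(sorted(encoded))
```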
The data in the leaves could be stored inline, using base64 encoding where necessary, or in additional files that could contain any kind of data.
The search inside a file could be a simple binary search on the array of all lines (minus a few lines at the beginning reserved for metadata).
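Sketched in Python (the metadata lines and the number of them are pure assumptions), the in-node lookup could look like this:

```python
from bisect import bisect_left

META_LINES = 2  # assumed number of metadata lines at the top of a node file

lines = [
    "#node-id: 3f",             # hypothetical metadata
    "#type: leaf",
    "apple\x00page:apple",      # sorted entries: key \x00 value
    "syntax\x00page:syntax",
    "wiki\x00page:wiki",
]

def lookup(lines, key):
    """Binary search the sorted entry lines, skipping the metadata lines."""
    entries = lines[META_LINES:]
    keys = [e.split("\x00", 1)[0] for e in entries]
    i = bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return entries[i].split("\x00", 1)[1]
    return None

print(lookup(lines, "syntax"))   # page:syntax
print(lookup(lines, "missing"))  # None
```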
In order to keep the files small, the node identifiers should be kept short. A simple idea would be to use random hex values between zero and something like 15 times the maximum node size to the power of the current tree height, and to generate new random ids until one is found that no existing node uses (which should normally take only one or two tries, as the actual number of nodes should be a lot lower than the size of the id space).
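This id scheme could be sketched as follows (Python, illustration only; the directory layout and function name are made up):

```python
import os
import random

def new_node_id(node_dir, max_node_size, tree_height):
    """Draw short random hex ids until one is free.

    The id space is ~15x larger than the possible number of nodes,
    so collisions with existing node files are rare.
    """
    id_space = 15 * max_node_size ** tree_height
    while True:
        candidate = format(random.randrange(id_space), "x")
        # each node is one file, so a free id is one without a file
        if not os.path.exists(os.path.join(node_dir, candidate)):
            return candidate
```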
As at most two nodes need to be kept in memory at a time, the memory consumption of the B+ tree would be rather low, too.
The trees could be used for all kinds of data storage and might replace the need for an SQLite database in some cases.
For the search index, the pages could be stored directly in the leaves (instead of just storing page ids) in order to get rid of the additional query for the actual page names. This will significantly increase the index size, but as only small parts of the index are loaded at a time, this shouldn't be a real problem. The only disadvantage is that while wildcard searches with a * at the end should be really efficient, wildcard searches with a * at the beginning will be really slow. The performance could be improved by using an additional tree with the words in reverse order, but this won't help wildcard queries with a * at both ends. A solution might be to have simple word lists as trees with large leaves so they can be scanned more efficiently.
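The reverse-order trick can be illustrated without a tree at all (Python sketch with a made-up word list): a wildcard at the beginning becomes a prefix query on the reversed words, and a prefix query is exactly what a sorted tree does well.

```python
words = ["index", "indexer", "reindex", "search", "wiki"]

# The hypothetical second index: every word stored in reverse order.
reversed_index = sorted(w[::-1] for w in words)

def suffix_matches(suffix):
    """Find all words ending in `suffix` via a prefix scan of the reversed index."""
    rev = suffix[::-1]
    return sorted(w[::-1] for w in reversed_index if w.startswith(rev))

print(suffix_matches("dex"))  # ['index', 'reindex']
```

In a real B+ tree the `startswith` scan would be a sorted range query over the leaves, so only the matching portion of the index is read.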
In plugins like the tag plugin, pages could be indexed not only by tag but also by the creation time of the page (using tag cdata as the index key). One could then efficiently list pages tagged with a certain tag ordered by creation date, including an efficient pagination that could be used by the blog plugin (this could use the metadata index and doesn't need to use a tree directly, but with the current index implementation this would be rather slow). The discussion plugin could use the index to provide a list of the most recent comments that can be efficiently updated when comments are deleted or published/hidden, or it could use a tree for storing the actual comments in order to get efficient pagination. The advanced concepts for versioned trees could even be used to keep older versions of comments.
An efficient tree data structure could provide new possibilities for DokuWiki while at the same time it would (most probably) dramatically improve the scalability of the search index. The file format for the tree could still be text-based and rather simple and thus fit into DokuWiki.
I (Michael Hamann) have already created some bits of code, but it's far from having even a finished API design. As this is one of many things I want to do, I don't know when I'll find the time to finish it. Any help is welcome, so feel free to contact me if you are interested.
As this could also be a plugin, and the core implementation of the tree is independent of whether it becomes part of DokuWiki, there is no need to decide right now if it will be included in DokuWiki. The first implementation should be a plugin; then the actual performance can be tested, and based on those tests there can be a discussion about whether it should be integrated into DokuWiki or remain a plugin (or whether the idea doesn't work at all and needs to be changed or dropped).
I was looking at this and I recalled DBA. It's a PHP abstraction layer for file-based databases, and from what I've seen there is always at least one driver available.
I know one of the DBA drivers uses a B-tree by default (maybe more do), which means logarithmic search and update times, and it's written in C, so it will be faster than a PHP class. This use case seems to fit DBA, so I suggest trying it and maybe avoiding “reinventing the wheel”. ;)
— flaviojs 2018-10-04 18:02