This document assumes you are familiar with C++ programming and with BeOS programming. However, it should be understandable even if you aren't.
If yes, MUSCLE may not be for you. MUSCLE defines its own byte-stream formats and messaging protocol, and is not generally compatible with other software protocols (such as IRC or FTP). If your pre-existing protocol follows a "message-stream-over-TCP-stream" design pattern, you can customize MUSCLE (by defining your own subclass of AbstractMessageIOGateway) to make it use your protocol; if not, you're probably better off coding to lower level networking APIs.
MUSCLE does all of its data transfer by serializing PortableMessages over TCP streams. If your application is a particularly high-performance one (such as video streaming), MUSCLE may not be able to provide you with the efficiency you need. In this case, you might use MUSCLE TCP streams for your control data only, and hand-code separate routines for your high-bandwidth/low-latency packets. I've used this pattern (TCP + UDP) in audio-over-Internet programs before and it works well.
In addition, you should be aware of the CPU and memory overhead that MUSCLE adds to your communications. While MUSCLE has been designed for efficiency, and will not make unreasonable demands on the systems that run it, it is necessarily somewhat less efficient than straight byte-stream TCP programming: every PortableMessage must be flattened into bytes before it is sent, unflattened again on receipt, and held in an in-memory queue at each end of the connection.
There are two common ways to use the MUSCLE package: you can have each client connect to a muscled server running on a central server system, and use it to communicate with each other indirectly... or you can have clients connect to each other directly, without using a central server. Each style of communication is useful in the right context, but it is important to choose the one that best fits what your app is going to do. Using the muscled in a client/server pattern is great because it solves several problems for you: it provides a way of communicating with other client computers without first needing to know their host addresses (etc), it gives you intelligent "broadcast" and "multicast" capabilities, and it provides a centralized area to maintain "shared state information" amongst all clients. On the down side, because all data must travel first to the central server, and from there on to the other client(s), message passing through the server is only half as fast (on average) as a direct connection to another client. Of course, to get the best of both worlds, you can use a hybrid system: each client connects to the server, and uses the server to find out the host addresses of the other clients; after that, it can connect to the other clients directly whenever it wants.
The MUSCLE package consists of tens of classes, most of which are needed by the MUSCLE server, some of which are needed by MUSCLE clients, and some of which may be useful to you in their own right, as generic utility classes. For most applications, the standard MUSCLE server will be adequate: you can just compile it and run it, and concentrate solely on the client side of your app. For some specialized apps, you may want to make your own "custom" server--you can do this easily by creating your own subclass of AbstractReflectSession. Of course, if you do this you won't be able to use any of the "general purpose" muscled servers that may be available...
To connect to the server, create a new MessageTransceiverThread object, and call StartConnectThread() on it. The MessageTransceiverThread constructor takes a BMessenger; pass in a BMessenger that points to your favorite BLooper (or whatever you're using to process BMessages). StartConnectThread() will return immediately, but when the background TCP thread connects to the server (or fails to do so) it will send a message to your BMessenger to notify you. Note that it's okay to allocate the MessageTransceiverThread object on the heap or on the stack, and it's okay to delete the MessageTransceiverThread object at any time... but when you delete it your connection will be closed.
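For reference, here is a minimal sketch of what that looks like (the exact StartConnectThread() arguments are an assumption here--check MessageTransceiverThread.h for the signature in your version of MUSCLE):

BMessenger toMe(this);   // (this) being your BLooper or BWindow
MessageTransceiverThread * mtt = new MessageTransceiverThread(toMe);
if (mtt->StartConnectThread("servername.serverdomain.com", 2960) != B_NO_ERROR)
   printf("Couldn't start the connect thread!\n");
// a notification BMessage will be delivered to (toMe) when the connection succeeds or fails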
To send a message to the server, just call the MessageTransceiverThread's SendOutgoingMessage() method. This method will return immediately, but the message you specify will be placed in an outbound-message queue for sending as soon as possible. The only tricky part (other than the fact that you have to send PortableMessages, not BMessages--see ConvertMessages.h if you wish to convert before sending) is that you specify the message to send by means of a PortableMessageRef object, rather than by value or by pointer. For example:
PortableMessage * newMsg = new PortableMessage('HELO');   /* must be allocated with the new operator */
newMsg->AddString("testing", "please");
PortableMessageRef msgRef(newMsg, NULL);
if (myTransceiver->SendOutgoingMessage(msgRef) != B_NO_ERROR) printf("Couldn't send message!\n");
/* Do NOT delete (newMsg) ... msgRef will do it for you when the time comes! */
Whenever a new PortableMessage arrives from the server, a PORTABLE_MESSAGES_RECEIVED BMessage will be sent to you via the BMessenger you specified in the MessageTransceiverThread constructor. When you receive such a message, do something like this:
PortableMessageRef msgRef;
while(myTransceiver->GetNextIncomingMessage(msgRef) == B_NO_ERROR)
{
   PortableMessage * pMsg = msgRef.GetItemPointer();
   HandleMessage(pMsg);   // Do whatever you gotta do
   /* do NOT delete (pMsg).  It will be deleted for you. */
}
When you've had enough of chatting with the server, you can end your session by simply deleting the MessageTransceiverThread object. It's safe to do this at any time.
In addition to connecting to MUSCLE servers, you can use MessageTransceiverThread objects to accept incoming connections from other programs (via MessageTransceiverThread::StartAcceptThread()). You can even connect one MessageTransceiverThread to another (by having one call StartAcceptThread() and the other call StartConnectThread()). Lastly, if you want to use a "custom" connection (e.g. with your own streaming protocol), you can define your own PortableMessageIOGateway factory function (via MessageTransceiverThread::SetGatewayFactoryFunc()), or call StartThread() and pass in your own PortableMessageIOGateway and pre-connected socket to use.
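As a sketch of the accepting side (again, the exact StartAcceptThread() arguments are an assumption--see MessageTransceiverThread.h for the signature in your version):

MessageTransceiverThread * acceptor = new MessageTransceiverThread(BMessenger(this));
if (acceptor->StartAcceptThread(2960) != B_NO_ERROR)   // listen for an incoming connection on port 2960
   printf("Couldn't start the accept thread!\n");
// connection events are announced via BMessages to your BMessenger, just as with StartConnectThread()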
MUSCLE 1.83 adds support for the Qt GUI toolkit via the qtsupport folder and its QMessageTransceiverThread class. This class is very similar to the MessageTransceiverThread class, only modified to use Qt's signals-and-slots system instead of sending Message objects. See the README-Qt.txt file and the QMessageTransceiverThread.h header file for more details about Qt client support.
For code that needs to run on platforms other than BeOS or AtheOS (or even for BeOS/AtheOS code where you don't want to spawn an extra thread), you can use the single-threaded messaging API, as defined by the PortableDataIO and PortableMessageIOGateway classes. These classes allow you to decouple your TCP data transfer calls from your message processing calls, and yet still keep the same general message-queue semantics that we know and love.
To create a connection to the MUSCLE server, you would first make a TCP connection using standard BSD sockets calls (see portablereflectclient.cpp for an example of this). Once you have a connected socket, you would use it to create a TCPSocketDataIO, which you would use to create a PortableMessageIOGateway object:
PortableMessageIOGateway gw(new TCPSocketDataIO(mysocketfd, false));
This gateway allows you to enqueue outgoing PortableMessages and dequeue incoming PortableMessages at any time, by calling AddOutgoingMessage() or GetNextIncomingMessage(), respectively. These methods are guaranteed never to block. Like the MessageTransceiverThread, the PortableMessageIOGateway uses PortableMessageRef objects to handle the freeing of PortableMessages when they are no longer in use.
To actually send and receive TCP data, you need to call DoOutput() and DoInput(), respectively. These methods will send/receive as many bytes of TCP data as they can (without blocking), and then return B_NO_ERROR (unless the connection has been cut, in which case they will return B_ERROR). Because these methods never block (unless your TCPSocketDataIO is set to blocking I/O mode, which in general it shouldn't be), you will need to employ select() or some other method to keep your event loop from using 100% CPU time while waiting to send or receive data. Here is an example event loop that does this:
int mysocketfd = Connect("servername.serverdomain.com", 2960);   // get a fresh TCP socket connection
PortableMessageIOGateway gw(new TCPSocketDataIO(mysocketfd, false));
bool keepGoing = true;
fd_set readSet, writeSet;
struct timeval * timeout = NULL;   // or point this at a timeval if you want select() to time out
while(keepGoing)
{
   FD_ZERO(&readSet);
   FD_ZERO(&writeSet);
   FD_SET(mysocketfd, &readSet);
   if (gw.HasBytesToOutput()) FD_SET(mysocketfd, &writeSet);
   if (select(mysocketfd+1, &readSet, &writeSet, NULL, timeout) < 0)
   {
      perror("select() failed");
      keepGoing = false;
   }
   bool readyToWrite = FD_ISSET(mysocketfd, &writeSet);
   bool readyToRead  = FD_ISSET(mysocketfd, &readSet);

   /* Do as much TCP I/O as possible without blocking */
   uint32 readBytes = 0, wroteBytes = 0;
   bool writeError = ((readyToWrite)&&(gw.DoOutput(wroteBytes) != B_NO_ERROR));
   bool readError  = ((readyToRead)&&(gw.DoInput(readBytes)   != B_NO_ERROR));
   if ((readError)||(writeError)) keepGoing = false;

   /* handle any received messages */
   PortableMessageRef msgRef;
   while(gw.GetNextIncomingMessage(msgRef) == B_NO_ERROR)
   {
      PortableMessage * msg = msgRef.GetItemPointer();
      printf("Received incoming TCP Message:\n");
      msg->PrintToStream();
      // handle message here
   }
}
/* note: don't call closesocket(mysocketfd), as the TCPSocketDataIO destructor will do it for you */
printf("Connection was closed!\n");
Alternatively, you can set the blocking-I/O parameter in the TCPSocketDataIO object to true, and use blocking I/O instead. If you do that, then you don't have to deal with the complexities of select()... but then it becomes difficult to coordinate sending and receiving at the same time (i.e. how do you call DoOutput() if you are blocked waiting for data in DoInput()?)
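For reference, the only change needed for blocking mode is the boolean passed to the TCPSocketDataIO constructor (a sketch, using the same constructor arguments shown above):

PortableMessageIOGateway gw(new TCPSocketDataIO(mysocketfd, true));   // true == use blocking I/O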
Regardless of whether you send and receive your messages with a MessageTransceiverThread or with direct calls to a PortableMessageIOGateway, the result looks the same to the program at the other end of the TCP connection: it always sees just a sequence of PortableMessage objects. How that program acts on those messages is, of course, up to it. However, the servers included in this archive do have some minimal standard semantics that govern how they handle the messages they receive. The following sections describe those semantics.
If you are connected to a MUSCLE server that was compiled to use the DumbReflectSession class to handle its connections, then the semantics are extremely simple: Any PortableMessage you send to the server will be sent on, verbatim, to every other connected client. (Sort of a high-level version of Ethernet broadcast packets). This may be useful in some situations, but for applications where bandwidth is an issue you'll probably want to use the "regular" server with StorageReflectSession semantics.
The StorageReflectSession-based server (a.k.a. "muscled") is much more powerful than the DumbReflectSession server, for two reasons. First, it makes intelligent decisions about how to route client messages, so that your messages go only to the clients you specify. Second, it allows you to store messages semi-permanently (they are retained for as long as you remain connected) in the server's RAM, where other clients can access them without having to communicate with you directly. If you imagine a situation where the server is running on 100Mbps Ethernet and the clients are connecting through 28.8 modems, you can see how this can be useful.
The StorageReflectSession server maintains a single tree data structure, very much like the filesystem of your average desktop computer. Although this data structure exists only in memory (nothing is ever written to the server's disk), it has much in common with a multi-user filesystem. Each node in the tree has an ASCII label that uniquely identifies it among its siblings, and each node contains a single PortableMessage object, which client machines may get or set (with certain restrictions). The root node of the tree contains no data, and is always present. Nodes underneath the root, on the other hand, may appear and disappear as clients connect and disconnect. The first level of nodes beneath the root is created automatically whenever a client connects to the server; these nodes are named after the host IP addresses of the client machines that connected (for example, "192.168.0.150"). The second level of nodes is also created automatically, and these nodes are given unique names that the server makes up arbitrarily. (This second level is necessary to disambiguate multiple connections coming from the same host machine.) The number of level 2 nodes in the tree is always the same as the number of currently active connections ("sessions") on the server.
              ___________'/'_________                      (level 0 -- "root")
             |                       |
      192.168.0.150           132.239.50.13                (level 1 -- host IP addresses)
        |         |                  |
    3217617    3217618            1829023                  (level 2 -- unique session IDs)
       |                             |
   SomeData                       MoreData                 (level 3 -- user data nodes)
                                  |       |
                              RedFish   BlueFish           (level 4)
Levels 1 and 2 of the tree reflect two simultaneous sessions connected from 192.168.0.150, and one connection from 132.239.50.13. In levels 3 and 4, we can see that the sessions have created some nodes of their own. These "user-created" nodes can be named anything you want, although no two siblings can have the same name. Each client may create data nodes only underneath its own "home directory" node in level 2--you aren't allowed to write into the "home directories" of other sessions. However, any client may read the contents of any node in the system.
As in any good filesystem (e.g. UNIX's), nodes can be identified uniquely by a node-path. A node-path is simply the concatenation of all node names from the root to the node, separated by '/' characters. So, the tree in the above example contains the following node-paths:
/
/192.168.0.150
/192.168.0.150/3217617
/192.168.0.150/3217617/SomeData
/192.168.0.150/3217618
/132.239.50.13
/132.239.50.13/1829023
/132.239.50.13/1829023/MoreData
/132.239.50.13/1829023/MoreData/RedFish
/132.239.50.13/1829023/MoreData/BlueFish
One thing most clients will want to do is create one or more new nodes in their subtree on the server. Since each node contains a PortableMessage, creating a node is the same thing as uploading data to the server. To do this, you send the server a PR_COMMAND_SETDATA message. A single PR_COMMAND_SETDATA message can set any number of new nodes. For each node you wish to set, simply AddMessage() the value you wish to set it to, with a field name equal to the path of the node relative to your "home directory". For example, here's what the client from 132.239.50.13 could have done to create the MoreData, RedFish, and BlueFish nodes under his home directory:
PortableMessage redFishMessage('RedF');    // these messages could contain data
PortableMessage blueFishMessage('BluF');   // you wish to upload to the server

PortableMessage * msg = new PortableMessage(PR_COMMAND_SETDATA);
PortableMessageRef msgRef(msg, NULL);      // ensures that (msg) will be deleted later
msg->AddMessage("MoreData/RedFish", redFishMessage);
msg->AddMessage("MoreData/BlueFish", blueFishMessage);
myMessageTransceiver->SendOutgoingMessage(msgRef);

Note that the "MoreData" node did not need to be explicitly created in this message; the server will see that it doesn't exist and create it before adding RedFish and BlueFish to the tree. (Nodes created in this way have empty PortableMessages associated with them.) If 132.239.50.13 later wants to change the data in any of these nodes, he can just send another PR_COMMAND_SETDATA message with the same field names but different messages.
If you want to find out the current state of one or more nodes on the server, you should send a PR_COMMAND_GETDATA message. In this PR_COMMAND_GETDATA message, you should add one or more strings to the PR_NAME_KEYS field. Each of these strings may specify the full path-name of a node in the tree that you are interested in. For example:
PortableMessage * msg = new PortableMessage(PR_COMMAND_GETDATA);
PortableMessageRef msgRef(msg, NULL);   // ensures that (msg) will be deleted later
msg->AddString(PR_NAME_KEYS, "/192.168.0.150/3217617/SomeData");
msg->AddString(PR_NAME_KEYS, "/132.239.50.13/1829023/MoreData/RedFish");
msg->AddString(PR_NAME_KEYS, "/132.239.50.13/1829023");
myMessageTransceiver->SendOutgoingMessage(msgRef);

Soon after you sent this message, the server would respond with a PR_RESULT_DATAITEMS message. This message would contain the values you asked for. Each value is stored in a separate message field, with the field's name being the full node-path of the node, and the field's value being the PortableMessage that was stored with that node on the server. So for the above request, the result would be:
PortableMessage:
   what = PR_RESULT_DATAITEMS
   numFields = 3
   field 0:  name = "/192.168.0.150/3217617/SomeData"          value = (a PortableMessage)
   field 1:  name = "/132.239.50.13/1829023/MoreData/RedFish"  value = (a PortableMessage)
   field 2:  name = "/132.239.50.13/1829023"                   value = (an empty PortableMessage)
The above method of retrieving data is okay as far as it goes, but it only works if you know in advance the node-path(s) of the data you want. But in the real world, you won't usually know e.g. the host addresses of other connected clients. Fortunately, the MUSCLE server understands wildcard patterns in the node-paths you send it. Wildcarding allows you to specify a pattern to watch for rather than a particular unique string. A detailed discussion of pattern matching is outside the scope of this document, but if you've used UNIX much at all you probably have a good idea how they work. For example, say we wanted to know the host address of every machine connected to the server:
PortableMessage * msg = new PortableMessage(PR_COMMAND_GETDATA);
PortableMessageRef msgRef(msg, NULL);   // ensures that (msg) will be deleted later
msg->AddString(PR_NAME_KEYS, "/*");
myMessageTransceiver->SendOutgoingMessage(msgRef);

The "/*" pattern in the PR_NAME_KEYS field above matches both "/192.168.0.150" and "/132.239.50.13" in the tree, so we would get back the following:
PortableMessage:
   what = PR_RESULT_DATAITEMS
   numFields = 2
   field 0:  name = "/192.168.0.150"   value = (an empty PortableMessage)
   field 1:  name = "/132.239.50.13"   value = (an empty PortableMessage)

Or perhaps we want to know about every node in every session's home directory whose name starts with the letters "Som". Then we could do:
msg->AddString(PR_NAME_KEYS, "/*/*/Som*");

And so on. And of course, you are still able to add multiple PR_NAME_KEYS values to a single PR_COMMAND_GETDATA message; the PR_RESULT_DATAITEMS message you get back will contain data for any node that matches at least one of your wildcard patterns.
One more detail: Since patterns that start with "/*/*" turn out to be used a lot, they can be made implicit in your path requests. Specifically, any PR_NAME_KEYS value that does not start with a leading '/' character is taken to have an implicit '/*/*/' prefix. So doing
msg->AddString(PR_NAME_KEYS, "Gopher");

is semantically equivalent to doing
msg->AddString(PR_NAME_KEYS, "/*/*/Gopher");
To remove nodes that you have created, send a PR_COMMAND_REMOVEDATA message whose PR_NAME_KEYS strings name the nodes to remove, relative to your home directory. For example, 132.239.50.13 could remove the nodes it created earlier like this:

PortableMessage * msg = new PortableMessage(PR_COMMAND_REMOVEDATA);
PortableMessageRef msgRef(msg, NULL);   // ensures that (msg) will be deleted later
msg->AddString(PR_NAME_KEYS, "MoreData/RedFish");
msg->AddString(PR_NAME_KEYS, "MoreData/BlueFish");
msg->AddString(PR_NAME_KEYS, "MoreData");
myMessageTransceiver->SendOutgoingMessage(msgRef);

or this:
msg->AddString(PR_NAME_KEYS, "MoreData");   /* Removing a node implicitly removes its children */

or even just this:
msg->AddString(PR_NAME_KEYS, "*");   /* wildcarding */

You can only remove nodes within your own subtree. You can add as many PR_NAME_KEYS strings to your PR_COMMAND_REMOVEDATA message as you wish.
When you send the server a PortableMessage whose 'what' code is not one of the PR_COMMAND_* constants, the server treats it as a message to be forwarded to other clients. You control which clients receive it by adding a PR_NAME_KEYS field of node-path patterns: the message is forwarded to every session whose subtree contains at least one node matching one of those patterns. For example:

PortableMessage * msg = new PortableMessage('HELO');
PortableMessageRef msgRef(msg, NULL);   // ensures that (msg) will be deleted later
msg->AddString(PR_NAME_KEYS, "/192.168.0.150/*");
myMessageTransceiver->SendOutgoingMessage(msgRef);

would cause your 'HELO' message to be sent to all sessions connecting from 192.168.0.150. Or, more interestingly:
msg->AddString(PR_NAME_KEYS, "/*/*/Gopher");

would cause your message to be sent to all sessions who have a node named "Gopher" in their home directory. This is very handy because it allows sessions to "advertise" which types of message they want to receive: in the above example, everyone who was interested in your 'HELO' messages could signify that by putting a node named "Gopher" in their home directory.
Other examples of ways to address your messages:
msg->AddString(PR_NAME_KEYS, "/*/*/J*");

will send your message to all clients who have a node in their home directory whose name begins with the letter 'J'.
msg->AddString(PR_NAME_KEYS, "/*/*/J*/a*/F*");

This (contrived) example would send your message only to clients who have something like "Jeremy/allen/Friesner" present in their home directory...
msg->AddString(PR_NAME_KEYS, "Gopher");

This is equivalent to the "/*/*/Gopher" example used above; if no leading slash is present, the "/*/*/" prefix is considered to be implied.
msg->AddString(PR_NAME_KEYS, "Gopher");
msg->AddString(PR_NAME_KEYS, "Bunny");

This message will go to clients who have a node named either "Gopher" or "Bunny" in their home directory. Clients who have both "Gopher" AND "Bunny" will still get only one copy of this message.
If your message does not have a PR_NAME_KEYS field, the server will check your client's parameter set for a string parameter named PR_NAME_KEYS. If this parameter is found, it will be used as a "default" setting for PR_NAME_KEYS. If a PR_NAME_KEYS parameter setting does not exist either, then the server will resort to its "dumb" behavior: broadcasting your message to all connected clients.
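A minimal sketch of setting such a default forwarding pattern (using the same PR_COMMAND_SETPARAMETERS mechanism that is described in more detail below):

PortableMessage * msg = new PortableMessage(PR_COMMAND_SETPARAMETERS);
PortableMessageRef msgRef(msg, NULL);
msg->AddString(PR_NAME_KEYS, "/*/*/Gopher");   // future messages without a PR_NAME_KEYS field will use this pattern
myMessageTransceiver->SendOutgoingMessage(msgRef);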
A PR_COMMAND_GETDATA query only tells you about the state of the tree at the moment you ask; it doesn't tell you when that data changes later. To deal with this problem, the StorageReflectSession server allows your client to set "subscriptions". Each subscription is nothing more than a node-path pattern matching one or more nodes that your client is interested in. The path format and semantics of a subscription request are exactly the same as those in a PR_COMMAND_GETDATA message, but the way you compose them is quite different. Here is an example:
PortableMessage * msg = new PortableMessage(PR_COMMAND_SETPARAMETERS);
PortableMessageRef msgRef(msg, NULL);   // ensures that (msg) will be deleted later
msg->AddBool("SUBSCRIBE:/*/*", true);
myMessageTransceiver->SendOutgoingMessage(msgRef);

The above is a request to be notified whenever the state of any node whose path matches "/*/*" changes (which is effectively the same as being notified whenever another session connects or disconnects--very handy for some applications). Note that the subscription path is part of the field's name, not the field's value. Note also that the field has been added as a boolean. That actually doesn't matter; you can add your subscribe request as any type of data you wish--the value won't even be looked at; only the field's name is important.
As soon as your PR_COMMAND_SETPARAMETERS message is received by the server, it will send back a PR_RESULT_DATAITEMS message containing values for all the nodes that matched your subscription path(s). In this respect, your subscription acts similarly to a PR_COMMAND_GETDATA message. The difference is that the server keeps your subscription strings "on file": from then on, every time a node whose node-path matches at least one of your subscription paths is created, changed, or deleted, the server will automatically send you another PR_RESULT_DATAITEMS message containing the node(s) that have changed and their newest values. Note that a single PR_RESULT_DATAITEMS message may report more than one changed node at a time (e.g. if someone else changes several nodes at once).
When the server wishes to notify you that a node matching one of your subscription paths has been deleted, it will do so by adding the node-path of the deceased node to the PR_NAME_REMOVED_DATAITEMS field of the PR_RESULT_DATAITEMS message it sends you. Again, there may be more than one PR_NAME_REMOVED_DATAITEMS value in a single PR_RESULT_DATAITEMS message.
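As a sketch, a client's handling of these update messages might look something like this (the BMessage-style indexed FindString() call is an assumption here--check PortableMessage.h for the exact signatures available in your version):

void HandleDataItemsMessage(const PortableMessage & msg)
{
   // nodes that were removed are listed, by node-path, under PR_NAME_REMOVED_DATAITEMS
   const char * removedPath;
   for (int32 i = 0; msg.FindString(PR_NAME_REMOVED_DATAITEMS, i, &removedPath) == B_NO_ERROR; i++)
      printf("Node [%s] was removed from the server\n", removedPath);

   // every other field's name is the node-path of a new or changed node, and the field's
   // value is that node's current PortableMessage; iterate those fields here and update
   // your local copy of each node accordingly.
}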
Your session's parameter set is modified by sending a PR_COMMAND_SETPARAMETERS message. For example:

PortableMessage * msg = new PortableMessage(PR_COMMAND_SETPARAMETERS);
PortableMessageRef msgRef(msg, NULL);           // ensures that (msg) will be deleted later
msg->AddBool(PR_NAME_REFLECT_TO_SELF, true);    // enable wildcard matching on my own subdirectory
msg->AddString(PR_NAME_KEYS, "/*/*/Gopher");    // set default message forwarding pattern
msg->AddBool("SUBSCRIBE:/132.239.50.*", true);  // add a subscription to nodes matching "/132.239.50.*"
msg->AddBool("SUBSCRIBE:*", true);              // add a subscription to nodes matching "/*/*/*"
msg->AddInt32("Glorp", 666);                    // other parameters like this will be ignored
myMessageTransceiver->SendOutgoingMessage(msgRef);

The fields included in your message will replace any like-named fields already existing in the parameter set. Any fields in the existing parameter set that aren't specified in your message will be left unaltered.
Parameters can be removed by sending a PR_COMMAND_REMOVEPARAMETERS message, whose PR_NAME_KEYS strings are (wildcardable) names of the parameters to remove. For example:

PortableMessage * msg = new PortableMessage(PR_COMMAND_REMOVEPARAMETERS);
PortableMessageRef msgRef(msg, NULL);                   // ensures that (msg) will be deleted later
msg->AddString(PR_NAME_KEYS, PR_NAME_REFLECT_TO_SELF);  // disable wildcard matching on my own subdirectory
msg->AddString(PR_NAME_KEYS, "SUBSCRIBE:*");            // removes ALL subscriptions (compare with "SUBSCRIBE:\*", which would remove only one)
myMessageTransceiver->SendOutgoingMessage(msgRef);
As an efficiency measure, you can obtain your PortableMessages from a message pool instead of allocating each one with the new operator. For example:

PortableMessage * msg = GetMessagePool()->GetObject();
msg->what = 'HELO';
PortableMessageRef msgRef(msg, GetMessagePool());
if (gateway.AddOutgoingMessage(msgRef) != B_NO_ERROR) printf("Error adding outgoing message???\n");

The above code sends a PortableMessage without having to do any memory allocations. When the message has been sent, the PortableMessageRef will automatically return the message and its reference count to the pools you specified.
Don't use the above code in a multithreaded environment, though--it will suffer from race conditions. Under BeOS or AtheOS, you can avoid those by using the thread-safe pools instead:
PortableMessage * msg = MessageTransceiverThread::GetMessagePool()->GetObject();
msg->what = 'HELO';
PortableMessageRef msgRef(msg, MessageTransceiverThread::GetMessagePool());
if (_transceiver->SendOutgoingMessage(msgRef) != B_NO_ERROR) printf("Error sending outgoing message???\n");

Of course, you don't have to use message pools at all; they are only there to improve efficiency. The 'old' way continues to work in all cases:
PortableMessage * msg = new PortableMessage('HELO');
PortableMessageRef msgRef(msg, NULL);
if (gateway.AddOutgoingMessage(msgRef) != B_NO_ERROR) printf("Error adding outgoing message???\n");

One last efficiency hint: when using the AddMessage() and FindMessage() methods of the PortableMessage class, prefer the overloads that take PortableMessageRefs as arguments over the (BeOS-style) overloads that take PortableMessages by value. This saves the computer from having to copy a PortableMessage on each call, which can speed things up dramatically.
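For example, attaching a sub-message by reference might look like this (a sketch; see PortableMessage.h for the exact overloads in your version):

PortableMessageRef subRef(new PortableMessage('SubM'), NULL);   // sub-message to attach
subRef.GetItemPointer()->AddString("info", "some data");
msg->AddMessage("subMessage", subRef);   // added by reference -- no copy of the sub-message is made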
To enable indexing for a node, simply add child nodes to it using PR_COMMAND_INSERTORDEREDDATA messages instead of the usual PR_COMMAND_SETDATA messages. In a PR_COMMAND_INSERTORDEREDDATA message, you specify the parent node that the new child node is to be added to, and the data/PortableMessage the new child node will contain--but muscled will be the one to assign the new child node an (algorithmically generated) name. Generated names are guaranteed to start with a capital 'I'. Muscled will add the new child node in front of a previously added indexed child whose name you specify, or will append it to the end of the index if the sibling name you specify is not found in the index.
Here is an example PR_COMMAND_INSERTORDEREDDATA message that will add an indexed child node to the node "myNode":
PortableMessage imsg(PR_COMMAND_INSERTORDEREDDATA);
imsg.AddString(PR_NAME_KEYS, "myNode");   // specify the node(s) to insert the child node(s) under
PortableMessage childData('Chld');        // any data you want to store can be placed in this message...
imsg.AddMessage("I4", childData);         // add the new node before I4, if I4 exists; else append to the end

If myNode already contains an indexed child named I4, the new node will be inserted into the index just before I4. If I4 is not found in the index, the new node will be appended to the end of the index. If you want to be sure your new child is always added to the end of the index, just AddMessage() using a field name that doesn't start with a capital 'I'.
You are allowed to specify more than one parent node in PR_NAME_KEYS (either via wildcarding, or via multiple PR_NAME_KEYS values)--this will cause the same child nodes to be added to all matching nodes. You are also allowed to specify multiple child messages to add in a single INSERTORDEREDDATA message (either by adding sub-messages under several different field names, or by adding multiple sub-messages under a single field name).
When a node contains an index (i.e. when it has at least one child under it that was added via PR_COMMAND_INSERTORDEREDDATA), any clients that are subscribed to that node will receive PR_RESULT_INDEXUPDATED messages when the index changes. These messages allow the subscribed clients to update their local copies of the index incrementally. Each PR_RESULT_INDEXUPDATED message will contain one or more string fields. Each string field's name is the fully qualified path of the indexed node whose index has changed, and each string value in that field represents a single operation on the index. An example message might look like this:
PortableMessage:  this=0x800c32f8, what='!Pr4' (558920244/0x21507234), entryCount=1, flatSize=79
   Entry: Name=[/spork/0/hello], CountItems()=4, TypeCode()='CSTR' (1129534546), flatSize=40
      0. [c]
      1. [i0:I0]
      2. [i1:I1]
      3. [r1:I1]

The first letter of each string is an opcode, one of the INDEX_OP_* constants defined in StorageReflectConstants.h. Here we see that the first instruction is a 'c', or INDEX_OP_CLEARED, indicating that the index was cleared. The next instruction, "i0:I0", starts with an INDEX_OP_ENTRYINSERTED and indicates that a child node named I0 was inserted at index 0. After that, a child node named I1 was inserted at index 1. Lastly, the INDEX_OP_ENTRYREMOVED opcode ('r') indicates that the node at index 1 (I1) was then removed from the list. By parsing these instructions, the client can update its own local index to follow that of the server.
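As an illustration, here is a sketch of how a client might apply one of these instruction strings to its local copy of an index, based on the "<opcode><index>:<name>" format described above (the single-character opcode values are taken from the example; see StorageReflectConstants.h for the authoritative INDEX_OP_* definitions):

#include <string>
#include <vector>
#include <cstdlib>

void ApplyIndexInstruction(std::vector<std::string> & localIndex, const std::string & inst)
{
   if (inst.empty()) return;
   if (inst[0] == 'c') localIndex.clear();          // INDEX_OP_CLEARED: forget the whole index
   else if ((inst[0] == 'i')||(inst[0] == 'r'))
   {
      std::string::size_type colon = inst.find(':');
      if (colon == std::string::npos) return;       // malformed instruction
      unsigned long idx  = strtoul(inst.substr(1, colon-1).c_str(), NULL, 10);
      std::string   name = inst.substr(colon+1);
      if (inst[0] == 'i')                           // INDEX_OP_ENTRYINSERTED: insert (name) at position (idx)
      {
         if (idx <= localIndex.size()) localIndex.insert(localIndex.begin()+idx, name);
      }
      else if (idx < localIndex.size()) localIndex.erase(localIndex.begin()+idx);   // INDEX_OP_ENTRYREMOVED
   }
}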
Note that the index only contains node names and ordering information; the actual node data is kept in the child nodes, in the normal fashion. So most clients will want to subscribe to both the indexed parent node, and its children, in order to display the data that the index refers to.
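In terms of the subscription mechanism described earlier, that typically means setting two subscriptions:

msg->AddBool("SUBSCRIBE:myNode", true);     // to receive PR_RESULT_INDEXUPDATED messages for the index itself
msg->AddBool("SUBSCRIBE:myNode/*", true);   // to receive the data held in the indexed child nodes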
An indexed node will also send the contents of its index (in the form of a PR_RESULT_INDEXUPDATED message with an INDEX_OP_CLEARED opcode, followed by one or more INDEX_OP_ENTRYINSERTED opcodes) to any client that requests it via the PR_COMMAND_GETDATA command. This message is sent in addition to the regular PR_RESULT_DATAITEMS message.
To remove a node entry from the index, simply delete the child node in the normal fashion, using a PR_COMMAND_REMOVEDATA message. You can, of course, update a child node's data using a PR_COMMAND_SETDATA message without affecting its place in the index.
One last note is that index data is always sent to all clients that ask for it; it is even sent to the client who created/owns the indexed node. That is to say, the PR_NAME_REFLECT_TO_SELF attribute may be considered always set as far as index data is concerned. This is because the index is created on the server side, and so not even the client-side initiator of the index creation can be exactly sure of the index's state. It's best for clients not to make assumptions about the contents of the index, and update their local indices based solely on the PR_RESULT_INDEXUPDATED messages they receive from the server.