DIS History
DIS originated from a Defense Advanced Research Projects Agency (DARPA) project in the late 1980s called SIMNET. At the time TCP/IP and high speed networks were just getting their legs, computers and networks were becoming powerful enough to do the computational operations needed, and 3D graphics was in its infancy.
A screen capture from an early SIMNET application is shown below:
Figure 1: DARPA's SIMNET Project
Each participant ran a SIMNET application that controlled a vehicle, such as a tank, and each simulator viewed the shared virtual battlefield. All the vehicles interacted in the same shared environment. If one simulator caused the tank it controlled to move, the other participants saw that movement in real time. This idea was advanced for the late 80's. Networking was not universal in that era, CPUs were slow compared to those of even a few years later, and graphics were primitive or non-existent, so SIMNET was implemented on advanced research workstations. It wasn't until the early-to-mid 1990s that simulations of this sort could begin to be implemented on commercial machines, and by the late 1990s commercial PCs had advanced graphics cards and enough CPU to work well at low prices.
The simulators of the era sometimes had displays that replicated a soldier's view of the battlefield, but before the SIMNET research the hosts running the simulations were usually not networked with hosts from other vendors. Each simulator worked in isolation: an aircraft simulator from one vendor couldn't see a tank controlled by a simulator from another vendor. The idea of SIMNET, quite advanced for the time, was to create a virtual, shared battlefield in which participants on multiple computers could see vehicles simulated on other hosts and interact with each other. SIMNET's major accomplishment was to serve as the research that allowed DIS to happen. Soon military 3D simulations could run on common office PCs.
DARPA projects were intended to transition out of the incubator research phase and into useful, real implementations. The SIMNET project worked out many of the state information exchange issues involved. Once that was done the work needed to be standardized and refined outside of DARPA. The organization that would eventually do this was the Simulation Interoperability Standards Organization (SISO), which took over development of the network protocol portion of the project and renamed it DIS. SISO developed DIS in a series of workshops held from 1989 to 1996. Once the protocol was developed, SISO took the relevant documents to the IEEE standards group and achieved approval of DIS as a standard.
At the time of SIMNET the concept of a shared, networked environment was revolutionary. In today's commercial game world, entertainment like "Call of Duty" or "World of Tanks" routinely shares environments between hosts. The companies that own these games make a great deal of money selling such applications to the public; gaming draws in more revenue than movies. Some sources say that gaming currently makes $85 billion a year, the film industry $35 billion, and the music industry $15 billion.
As mentioned, the SIMNET work led to another organization, SISO, and SISO developed documents that were taken to the IEEE for standards approval. These included the documents below.
Be aware that the documents may specify more than an application needs, and actual applications may implement the standards only partially. For example IEEE-1278.1 specifies the format of dozens of Protocol Data Units (PDUs). Despite the many PDUs, the vast majority of applications use only a subset of message types, and simply ignore or drop any PDU that is not expected. An application that deals only with the movement of tanks might well ignore electronic warfare and the PDUs that implement it. As a result the developers may handle only a half-dozen PDUs for the application. Very often the IEEE-1278.x documents can be regarded as interesting advice; few or no applications regard the documents as mandatory to implement in full.
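As a sketch of that pattern, the fragment below dispatches on the PDU type field in the header and silently drops everything else. The PDU type values (1 = Entity State, 2 = Fire, 3 = Detonation) are real values from the standard's enumerations; the handler methods are hypothetical placeholders.

```java
/** Sketch of partial PDU handling: process a few types, drop the rest. */
public class PduFilter {
    // PDU type values from IEEE-1278.1: 1 = Entity State, 2 = Fire, 3 = Detonation.
    public void onDatagram(byte[] data) {
        if (data.length < 12) return;        // shorter than a PDU header; discard
        int pduType = data[2] & 0xFF;        // third byte of the header is the PDU type
        switch (pduType) {
            case 1 -> handleEntityState(data);
            case 2 -> handleFire(data);
            case 3 -> handleDetonation(data);
            default -> { /* all other PDU types are silently ignored */ }
        }
    }

    private void handleEntityState(byte[] data) { /* decode and update vehicle state */ }
    private void handleFire(byte[] data)        { /* decode weapon-fire event */ }
    private void handleDetonation(byte[] data)  { /* decode detonation event */ }
}
```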
Several documents were approved in addition to the one that defines the structure of messages. An image from the IEEE documentation showing what the standards are trying to accomplish is below:
Figure 2: IEEE-1278.1 Documentation
One of the things this image displays is that SISO-REF-010, discussed more later, is standardized by the SISO organization, not IEEE. SISO-REF-010 assigns meaning to otherwise arbitrary numbers. For example a PDU describing a tank also describes the nationality of its owner. On the network this is done by putting a number in a field, but the numbers are completely arbitrary and every participant must agree on what they mean. A PDU that contains the value 225 in the nationality field describes a US asset, while North Korean assets carry the value 119 in the same field. The same process applies to describing weapons and vehicles, and the catalog of items changes so fast that SISO-REF-010 can't be maintained in a timely way by the precise but slow-moving IEEE standards process. Instead, SISO itself updates the document.
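In code this usually ends up as a shared enumeration table. The sketch below shows the idea using a hypothetical `Country` type; the two values shown (225 for the United States, 119 for North Korea) are the ones cited above, and a real SISO-REF-010 table would hold hundreds of entries.

```java
/** Sketch of a SISO-REF-010 style enumeration. The numbers are arbitrary,
 *  so every participant must share the same table; this tiny subset is
 *  illustrative only. */
public enum Country {
    UNITED_STATES(225),
    NORTH_KOREA(119);

    private final int value;
    Country(int value) { this.value = value; }
    public int getValue() { return value; }

    /** Look up a country from the raw on-the-wire field value. */
    public static Country fromValue(int value) {
        for (Country c : values()) {
            if (c.value == value) return c;
        }
        throw new IllegalArgumentException("Unknown country code: " + value);
    }
}
```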
The IEEE-1278.1 document is the most valuable and useful of the standards documents. It contains exact definitions of all the PDUs and, in some sections, good descriptions of how to handle the data within.
The first version was approved as DIS version 5, relatively early in the DIS program. It was early, but it let developers get to work. The 1278.1 document ran to 138 pages.
A better version came out a few years later, in 1998, when a better-documented version of DIS was approved; the version number was changed to 6. The document was now 213 pages long. Version 6 is backwards-compatible with version 5, meaning that version 6 software can accept messages from version 5 software.
A still better version of 1278.1, DIS version 7, came out in 2012. The document now runs to about 750 pages, and includes good instructions for how to handle some of the data contained in fields, such as timestamps. Two new PDUs were added to handle directed energy, and DIS-7 can still handle traffic from prior versions.
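As an illustration of the timestamp rules, the sketch below encodes and decodes the DIS timestamp format, in which the low bit flags absolute versus relative time and the remaining 31 bits count units of 3600/2^31 seconds past the top of the hour. The class is a hypothetical helper, not part of any standard API.

```java
/** Sketch of the DIS timestamp encoding: bit 0 flags absolute time
 *  (synchronized to UTC) vs. relative time, and the upper 31 bits count
 *  units of (3600 / 2^31) seconds past the top of the hour. */
public final class DisTimestamp {
    private static final double UNITS_PER_SECOND = Math.pow(2, 31) / 3600.0;

    /** Encode seconds-past-the-hour into a DIS timestamp field. */
    public static int encode(double secondsPastHour, boolean absolute) {
        int units = (int) Math.round(secondsPastHour * UNITS_PER_SECOND);
        return (units << 1) | (absolute ? 1 : 0);
    }

    /** Decode a DIS timestamp field back to seconds past the hour. */
    public static double decode(int timestamp) {
        int units = timestamp >>> 1;   // drop the absolute/relative flag bit
        return units / UNITS_PER_SECOND;
    }
}
```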
There are two versions of IEEE-1278.2, one dated in 1995 and one in 2015.
The 1995 IEEE-1278.2, according to the document, "...defines the communication services required to support the message exchange described in IEEE Std 1278.1-1995. In addition, IEEE Std 1278.2-1995 provides several communication profiles that meet the specified communications requirements."
The document primarily imposes requirements for sending PDUs on the network, including sending via TCP, sending via UDP, and a programmer-defined technique for reliable UDP traffic. Multicast was just being adopted in 1995, but the document does mention it.
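As a minimal sketch of the plain best-effort UDP profile, the fragment below sends a single datagram. The 144-byte buffer merely stands in for a marshalled Entity State PDU, and port 3000 is a common DIS convention, not a value mandated by the document.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

/** Sketch of best-effort UDP delivery of a PDU as one datagram. */
public class UdpSender {
    public static void main(String[] args) throws Exception {
        byte[] pdu = new byte[144];  // stand-in for a marshalled Entity State PDU
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            InetAddress dest = InetAddress.getByName("255.255.255.255");
            socket.send(new DatagramPacket(pdu, pdu.length, dest, 3000));
        }
    }
}
```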
Some standards for message delivery time are also given: about 100 ms delivery time is specified for closely coupled applications, while 300 ms is considered acceptable for loosely coupled applications.
The 2015 1278.2 document is more extensive, particularly regarding multicast. The changes are summarized as follows:
- Incorporation of rules on PDU bundling
- Addition of section on the use of Multicast for Interest Management
- Definition of Internet Protocol Version 4 (IPv4) multicast service profile
- Definition of Internet Protocol Version 6 (IPv6) multicast service profile
- Addition of rules on maximum transmission unit (MTU)
- Reorganization of the document to aid readability and create a more logical place for new content such as IPv6 and interest management
- Addition of annex providing guidance for using IP multicast addressing
These are all good subjects that keep up with modern practice. Multicast was just beginning to be adopted and approved in the early-to-mid 1990s; today the basic software technology is available almost everywhere.
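For the IPv4 multicast service profile mentioned above, joining a group and listening for PDUs can be sketched as below. The group address 239.1.2.3 (an administratively scoped address) and port 3000 are illustrative choices, not values fixed by the standard.

```java
import java.net.DatagramPacket;
import java.net.InetAddress;
import java.net.MulticastSocket;

/** Sketch of the IPv4 multicast profile: join a group and receive one PDU. */
public class MulticastReceiver {
    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName("239.1.2.3");
        try (MulticastSocket socket = new MulticastSocket(3000)) {
            socket.joinGroup(group);         // deprecated in recent JDKs but still functional
            byte[] buffer = new byte[8192];  // large enough for bundled PDUs
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            socket.receive(packet);          // blocks until a PDU arrives
            int pduType = buffer[2] & 0xFF;  // PDU type is the third header byte
            System.out.println("Received PDU type " + pduType);
        }
    }
}
```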
Arguably (very, very arguably) there are some new technologies emerging related to web servers, content delivery services, and cloud computing. The economics are often compelling, and the items below may well drive adoption. Using these features will likely lead to simulation networking implementations that have not yet been explored.
- Web-based software can be much simpler and faster to deploy or update. In a lab with a thousand hosts running a low-volume simulation application, how do you update or deploy the application? If the software is installed on every host it takes a good deal of time and effort. On the other hand, if the application is web based we can roll out the new software on a single web server and ask users to go to a URL; the application runs within the web browser. This is similar to the current popularity of web-based email clients over email applications deployed on the host. Economically attractive client-server, web-based designs may well have higher latency than local peer-to-peer network architectures, but not so much higher as to rule them out for some types of simulations.
- Content delivery services involve siting identical servers at multiple locations. For example, someone going to an Amazon server is not sent to a single web server in Seattle, Washington. Instead the client is sent to an Amazon web server that is geographically close to it: someone on the East Coast is sent to a server on the East Coast, and someone on the West Coast to a server on the West Coast. This results in lower latency for users. How this will apply to simulations is somewhat unknown, but it is starting to emerge in the game industry. If nothing else, content delivery servers can be used to deliver web page content more quickly.
- Cloud computing involves running applications on virtual hosts. One ideal is to design a self-configuring operating system image, install the application software, and then run it at nearly unlimited sites with any number of hosts at each site. One can start off with a slow, inexpensive host, then move to a more powerful host when the load increases. It is also possible to change networking and simulation architectures to run multiple hosts on a single problem; the cloud can measure load and increase or decrease the number of hosts in response. The networking to make this happen is still being investigated.
Are these topics unaddressed by the standards documents? Off-topic? Yes. But they're not well researched yet, either. (Sorry, had to rant.)
The 1278.3 standard says "This recommended practice establishes guidelines for exercise management and feedback in Distributed Interactive Simulation (DIS) exercises. It provides recommended procedures to plan, set up, execute, manage, and assess a DIS exercise." It describes a series of operations needed to correctly run a simulation exercise, illustrated below.
Figure 3: IEEE-1278.3 Documentation
The process here is actually quite close to what is called Distributed Simulation Engineering and Execution Process (DSEEP), also approved as IEEE-1730.
It is a quite extensive management tool for complex applications, and quite useful for those managing projects.
You just wrote a DIS application. How do you know it works?
The 1278.4 document says:
"This recommended practice establishes guidelines for the verification, validation, and accreditation (VV&A) of Distributed Interactive Simulation (DIS) exercises. It provides “how-to” procedures for plan- ning and conducting DIS exercise VV&A."
Just because you wrote a large, complex application doesn't mean it works. 1278.4 describes operations and procedures that can be used to assess it. Assessment can be particularly useful in simulation work, which very frequently demands frequent, small changes to the code, where any change at all can cause unwanted and surprising effects.