Limited Bandwidth - There is a limit to the rate at which hosts on the network can send data to one another. If a computer is connected to the network with a 56 kbps modem, this might be 5 kilobytes per second, while computers on a local area network might be able to communicate at 128 megabytes per second. For service providers, additional bandwidth capacity can be costly, so even if there is no physical bandwidth limitation, bandwidth conservation is important for many projects.
Packet Loss - Computer networks are inherently unreliable. Information transmitted over a network may become corrupted in transit, or may be dropped at a router where traffic has become congested. Even when (especially when) using a guaranteed message delivery protocol such as TCP, the unreliable nature of the underlying network still must be taken into account for network applications.
Latency - Messages sent from one host to another on the network take time to arrive at the destination. The time can be influenced by many factors, including the medium over which the messages travel, how many intermediate hosts must route the message, and the level of traffic congestion at each of those network nodes. Latency becomes particularly problematic in network simulations that attempt to present a real-time interface to the client, where the latency of the connection may be perceptible to the user.
See Torque Network Library Design Fundamentals for information on how TNL deals with the fundamental limitations of computer networks.
IP - Internet Protocol: The Internet Protocol is the basic building block for internet communications. IP is a routing protocol, which means that it is used to route information packets from a source host to a destination host, specified by an IP address. IP packets are not guaranteed to arrive at the destination specified by the sender, and those packets that do arrive are not guaranteed to arrive in the order they were sent. IP packet payloads may also be corrupted when they are delivered. IP is not useful as an application protocol - it is used mainly as a foundation for the higher level TCP and UDP protocols.
UDP - User Datagram Protocol: The User Datagram Protocol supplies a thin layer on top of IP that performs error detection and application level routing on a single host. UDP packets are addressed using both an IP address to specify the physical host, and a port number, to specify which process on the machine the packet should be delivered to. UDP packets also contain a checksum, so that corrupted packets can be discarded. UDP packets that are corrupted or dropped by an intermediate host are not retransmitted by the sender, because the sender is never notified whether a given packet was delivered or not.
TCP - Transmission Control Protocol: TCP was designed to make internet programming easier by building a reliable, connection-based protocol on top of the unreliable IP. TCP does this by sending acknowledgements when data packets arrive, and resending data that was dropped. TCP is a stream protocol, so the network connection can be treated like any other file stream in the system. TCP is not suitable for simulation data, because any dropped packet will stall the data pipeline until the dropped data can be retransmitted.
Torque Network Library Design Fundamentals contains a discussion of why this is not optimal for bandwidth conservation, and an explanation of the protocol solution implemented in TNL.
Sockets created to use the TCP stream protocol can either be set to "listen()" for connections from remote hosts, or can be set to "connect()" to a remote host that is currently listening for incoming connections. Once a connection is established between two stream sockets, either side can send and receive guaranteed, ordered data to and from the other.
Sockets can also be created in datagram mode, in which case they will use the underlying UDP protocol for transmission of datagram packets. Since the datagram socket mode is connectionless, the destination IP address and port must be specified with each data packet sent.
The socket API contains other utility routines for performing operations such as host name resolution and socket options configuration. The Microsoft Windows platform supplies a Windows socket API called Winsock that implements the BSD socket API.
Peer-to-Peer: In a peer-to-peer network application, the client processes involved in the network communicate directly with one another. Though there may be one or more hosts with authoritative control over the network of peers, peer-to-peer applications largely distribute the responsibility for the simulation or application amongst the peers.
Client-Server: In the client-server model, one host on the network, the server, acts as a central communications hub for all the other hosts (the clients). Typically the server is authoritative, and is responsible for routing communications among the clients.
Client-Tiered Server Cluster: In network applications where more clients want to subscribe to a service than can be accommodated by one server, the server's role will be handled by a cluster of servers peered together. The servers in the network communicate using the peer-to-peer model, and communicate with the clients using the client-server model. The servers may also communicate with an authoritative "super-server" to handle logins or resolve conflicts.
TNL imposes no specific network topology on applications using it.
There are many symmetric encryption algorithms of varying levels of security, but all of them when used alone have several drawbacks. First, both parties to the communication must have a copy of the shared secret key. If a client is attempting to communicate with a service it has never contacted before, and with which it shares no key, it won't be able to communicate securely. Also, although an intermediate eavesdropper cannot read the data sent between the hosts, it can alter the messages, potentially causing one or more of the systems to fail.
When a message is sent using a symmetric cipher, a cryptographic hash of the message is computed, and that hash (or some portion of it) is encrypted and sent along as well. Any change to the message will cause the receiver's hash algorithm to compute a different hash than the one encrypted and included with the message, notifying the receiving party that the message has been tampered with.
Key exchange algorithms (Diffie-Hellman, ECDH) have certain properties such that two users, A and B, can share their public keys with each other in plain text, and then, using each other's public key and their own private keys, they can generate the same shared secret key. Eavesdroppers can see the public keys, but, because the private keys are never transmitted, cannot know the shared secret. Because public key algorithms are computationally much more expensive than symmetric cryptography, network applications generally use public key cryptography to share a secret key that is then used as a symmetric cipher key.
See NetInterface, the TNL class that implements the secure key exchange protocol.
To combat this attack, the concept of certificates was introduced. In this model, the public key of one or both of the participants in the communication is digitally signed with the private key of some known, trusted Certificate Authority (CA). Then, using the Certificate Authority's public key, each party to the communication can validate the other's public key.
Traffic flooding: This attack sends a barrage of data to the public address of the server, in an attempt to overwhelm that server's connection to the internet. This attack often employs many machines, hijacked by the attacker and acting in concert. These attacks are often the most difficult to mount since they require a large number of available machines with high-speed internet access in order to attack a remote host effectively.
Connection depletion: The connection depletion attack works by exploiting a weakness of some connection-based communications protocols. Connection depletion attacks work by initiating a constant flow of spurious connection attempts. When a legitimate user attempts to connect to the server, all available pending connection slots are already taken up by the spoofed connection attempts, thereby denying service to valid clients.
The TNL uses a two-phase connection protocol to protect against connection depletion attacks, and implements a client-puzzle algorithm to prevent CPU depletion attacks. Also, TNL is built so that bogus packets are discarded as quickly as possible, preventing flooding attacks from impacting the server CPU.
In client/server applications, firewalls and NATs aren't generally a problem. When a client behind a firewall or NAT makes a request to the server, the response is allowed to pass through the firewall or NAT because the network traffic was solicited first by the client.
In peer-to-peer applications, NATs and firewalls pose a greater challenge. If both peers are behind different firewalls, getting them to communicate requires that each initiate the connection, so that the other's data is allowed through its firewall or NAT. This can be facilitated by a third-party server or client with which both peers can communicate directly.
The TNL provides functionality for arranging a direct connection between two firewalled hosts via a third party master server.