The UK Government's Centre for the
Protection of National Infrastructure has published a key paper which attempts a wide-ranging
assessment of TCP security. TCP is, of course, one half of the ubiquitous TCP/IP, the protocol
pairing that lies at the heart of Internet communications.
As a “first of its kind” document, the
security assessment is worth understanding in some detail.
The paper, authored by Argentinian
security researcher Fernando Gont, is a partner to an earlier
study conducted by Gont into the IP protocol.
It makes the point that approaches to
TCP security are often fragmented, leaving users at the mercy of individual vendors'
implementations. In many cases, Gont writes, security problems are known and have solutions, but
those solutions have not resulted in official IETF documents (RFCs). This limits the Internet
community's ability to check compliance of particular TCP implementations (such as the software
stacks built into operating systems and routers), since there is no reference against which a
customer could check whether a vendor has addressed particular security issues.
TCP's role is to handle the setup of
communications between specific end nodes in a computer network. The network itself uses IP (the
Internet Protocol) to address and route packets between networks; TCP handles requests from
applications to use the network – in other words, it provides the interface between the application
and the IP layer.
While hugely detailed – the document
runs to 130 pages – Gont's report still can't be considered “comprehensive”. Gont has assessed the
contents of 18 TCP-associated RFCs, which does not cover the entirety of the protocol.
For those who don't want to read the
entire document, here's an overview of some of the problems discovered by Gont.
The first aspect of the TCP assessment
is an examination of how the RFCs define the formatting and application of TCP's header fields, and
how these impact security.
TCP's header fields include data defining the following:
- Source and destination port numbers – used to identify the communication sessions;
- Sequence and acknowledgment numbers – used to manage the ordering of packets sent between two hosts;
- Data offset and control bits – the control bits handle a number of TCP operations such as congestion control, queue control, session interruption, and the end of a session;
- Window size – this field tells the sender how much data the receiver is currently prepared to accept;
- Checksum – used to check the packet's integrity;
- Urgent Pointer – may be used to advise the receiving computer that it should give the packet priority in processing;
- Options – optional features such as timestamping and window scaling are notified in this field;
- Padding – a series of zeroes inserted to ensure that the header ends at a 32-bit boundary.
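As a rough illustration of this layout (a sketch only – the sample bytes below are fabricated, and the field offsets follow the original RFC 793 diagram), the fixed 20-byte portion of a TCP header can be unpacked with Python's standard struct module:

```python
import struct

def parse_tcp_header(data: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header (RFC 793 layout)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", data[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,                          # sequence number
        "ack": ack,                          # acknowledgment number
        "data_offset": offset_flags >> 12,   # header length in 32-bit words
        "flags": offset_flags & 0x01FF,      # control bits
        "window": window,                    # receiver's advertised window
        "checksum": checksum,
        "urgent_pointer": urgent,
    }

# Fabricated example: source port 49499, destination port 80, SYN flag set.
sample = struct.pack("!HHIIHHHH", 49499, 80, 1000, 0, (5 << 12) | 0x002, 65535, 0, 0)
hdr = parse_tcp_header(sample)
print(hdr["src_port"], hdr["dst_port"], hdr["data_offset"])  # 49499 80 5
```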
Gont found problems with most aspects of
the TCP header. Port numbering is a good example.
When an application initiates a
communication, the port number is important. Since different applications can communicate
simultaneously, the port number makes sure that an incoming packet is directed to the right application.
Port numbering is, however, an attack
vector, since if an attacker can guess the port being used by a particular application, packets can
be addressed to that port as the basis of an attack. To make it hard to guess which port a
particular application is using, the TCP implementation has to implement a randomisation scheme.
For example, while a Web server listens
on port 80 (the port to which I address my request for a Web page), the user's machine will select
a random port number. This will form a pair of source:destination ports – say, 49499 as the source
port, 80 as the destination. The Web server will address data to me, using my source port.
Since my computer is listening for
traffic on port 49499, it could be attacked by someone sending data to that port – if the attacker
knows I always use port 49499 for Web traffic. This is why the strength of randomisation is
important. However, a range of weaknesses reduces the effectiveness of port randomisation.
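This selection can be observed directly with Python's standard socket module: binding to port 0 asks the operating system to pick the ephemeral source port itself (a minimal sketch; the range used and the quality of the randomisation vary by operating system – which is exactly the weakness at issue):

```python
import socket

# Ask the OS to choose an ephemeral source port by binding to port 0.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
addr, port = sock.getsockname()
print(f"OS-assigned ephemeral port: {port}")
sock.close()
```

Running this repeatedly shows how widely (or narrowly) a given stack spreads its choices.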
More important, however, is the
predictability of destination ports. The only way that I can know which port to use to
address Web servers is if they all listen for communications on the same port. Otherwise, if
I addressed a Web server on the wrong port, it would fail to respond. Attacks on these well-known ports (such
as SYN floods) are, however, well understood and reasonably well defended.
Most of the attacks identified by Gont
(all the way through TCP) are denial-of-service. These do damage – but they don't involve data
leakage from your organisation. However, he also identifies significant risks from data injection
arising from issues such as port randomness.
The problem is that if an attacker can
identify the connection ID (derived indirectly from randomised port numbers), the attacker can then
send packets to the target system, and the target system will try to handle those packets as legitimate traffic.
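To give a sense of scale (these figures are standard protocol arithmetic, not values from Gont's paper): a blind, off-path injector who knows both IP addresses and the server's well-known port must still guess the client's ephemeral port and a sequence number that falls inside the receive window. With ideal 16-bit port randomisation and a 32-bit sequence space:

```python
# Rough guess-space estimate for blind TCP packet injection.
port_choices = 2 ** 16           # possible client source ports (ideal randomisation)
seq_space = 2 ** 32              # 32-bit sequence number space
window = 65_535                  # an injected packet need only land inside the window

# Number of sequence-number "buckets" an attacker must cover per port guess.
seq_guesses = seq_space // window            # ≈ 65,537
total_guesses = port_choices * seq_guesses
print(f"{total_guesses:,} packets to cover the full space")
```

Weak port randomisation shrinks `port_choices` dramatically, which is why Gont treats it as a security-relevant property rather than an implementation detail.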
Another source of security problems Gont
identifies is that some data leakage occurring through the normal behaviour of TCP may be of value to attackers.
For example, there are various ways in
which operating systems might leak data about system uptime to the outside world via TCP's
timestamp field. Many systems reset timestamps to zero after reboot – allowing an attacker to infer
that a system with a very long uptime may not yet have the most recent security patches applied, since many patches require a reboot.
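The inference itself is simple arithmetic. A sketch, assuming the timestamp clock was zeroed at boot and ticks at 100 Hz (an assumption for illustration; actual tick rates vary by OS, commonly 100–1000 Hz):

```python
def estimate_uptime_days(tsval: int, hz: int = 100) -> float:
    """Estimate host uptime from an observed TCP timestamp value.

    Assumes the timestamp clock was reset to zero at boot and ticks
    at `hz` ticks per second (rates vary by operating system).
    """
    return tsval / hz / 86_400  # 86,400 seconds per day

# A fabricated observed TSval of 1,555,200,000 ticks at 100 Hz:
print(f"{estimate_uptime_days(1_555_200_000):.0f} days")  # → 180 days
```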
Another important information leakage
enabled by TCP is operating system fingerprinting. Various implementations respond to packets sent
to open ports in predictable ways, which might allow an attacker to determine whether an
Internet-accessible machine is running Windows, Linux, OSX, or some other operating system.
Clearly, the ability to identify both an operating system and infer whether or not it has the
latest patches should be considered a vulnerability – and Gont provides advice for TCP implementers
on how these might be addressed.
A number of other problems are
identified in the Gont paper; the next step will be to see whether the publication forms the basis
of concrete action in the Internet community.
An underlying question is “how did
things get this way?”
Gont identifies part of the problem
clearly in the paper: as noted earlier, responses to vulnerabilities are often implemented in
individual products rather than being published as RFCs.
The other issue, however, is more
fundamental to the Internet: it has continued to evolve as a web of trust, long after that trust ceased to be justified.
In the early Internet – long before the
World Wide Web – Internet-connected computers were known to each other, not just as IP addresses,
but on a more interpersonal level. That formed the basis of the Internet's trust-web, since the
system administrator at one university, in establishing a connection to other universities, formed
relationships with those other universities.
The Internet has far outgrown the
original models, but its protocols still embed an assumption that other computers may be trusted.
If the protocol (rather than the implementation) is still undefended, it's because that protocol is
still working under the assumption that its features and capabilities will not be abused.
The pair of Internet protocols – TCP,
the Transmission Control Protocol, and IP, the Internet Protocol – have become so ubiquitous that
they're generally concatenated together as TCP/IP.
They have very different roles, however.
The Internet Protocol is designed for
the interconnection of networks. Its design purpose is not to interconnect individual computers.
On one side, the Internet protocol takes
packets from your network, identifies the next hop in those packets' route to their destination,
and passes the packets to that next hop. And for incoming data, it accepts packets from other
networks, and passes them to your network.
When we're talking about the behaviour
of applications that communicate over the network, we're talking about the role of TCP. It's
TCP that the applications talk to, in the form of a “stack” running as a driver or process on the
end device (a PC, laptop, or any other Internet-capable device).
When an application wants to
communicate, it passes its data to TCP, and TCP begins the long chain of communications that
culminates with a connection to another computer. But it doesn't do so directly; rather, TCP
prepares its packets and hands them to the Internet Protocol layer to pass over the network.
TCP is not used for all
application-network communications. Because it is concerned with high reliability, TCP will delay
packets or slow down its network communications rather than drop them. For time-sensitive
applications, TCP's sibling, UDP, provides less reliability, but drops packets rather than delaying them.
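The distinction shows up directly in the sockets API. A minimal sketch using Python's standard socket module (loopback only): a datagram (UDP) socket sends individual packets with no connection and no retransmission, in contrast to the reliable byte stream a SOCK_STREAM (TCP) socket provides:

```python
import socket

# UDP: fire-and-forget datagrams – no connection, no retransmission.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))            # OS picks a free port
recv.settimeout(2.0)                   # don't block forever if the datagram is lost
send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"time-sensitive sample", recv.getsockname())
data, _ = recv.recvfrom(1024)
print(data)                            # the datagram arrives whole, or not at all
send.close()
recv.close()
```

On the loopback interface the datagram will virtually always arrive; across a real network, UDP makes no such guarantee – which is precisely the trade-off described above.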