Test Methodology
At the core of our product design we embrace fundamental protocols and industry best practices to achieve a comprehensive testing & monitoring strategy while maintaining an absolute do-no-harm presence on your network.
Home Network Monitoring
On your home network we use standard ARP to query for what devices are present and attached to the local LAN segment. All our historical tracking and alerting is based on this simple mechanism. It is important to point out that we DO NOT do any form of ARP spoofing or ARP cache poisoning and DO NOT attempt to interfere with your normal DHCP operation in any way. Also, we NEVER alter your routing, try to insert ourselves in the data path or change anything on your router. The bottom line is that we are well aware of the many connectivity issues and troubleshooting headaches associated with devices that try to take control of your network and coerce your normal traffic flows.
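To make the passive nature of this concrete, here is a simplified sketch (not our production code) of how LAN neighbors can be enumerated on Linux by reading the kernel's own ARP cache, the table that normal ARP activity populates. The sample table contents below are invented for illustration.

```python
# Illustrative sketch: list LAN devices by parsing /proc/net/arp-style
# text. This only reads state the kernel already has -- no ARP spoofing,
# cache poisoning, or interference with DHCP or routing.

def parse_arp_table(text):
    """Parse /proc/net/arp-style text into {ip: mac} for complete entries."""
    devices = {}
    for line in text.splitlines()[1:]:          # skip the header row
        fields = line.split()
        if len(fields) >= 4:
            ip, flags, mac = fields[0], fields[2], fields[3]
            # Flags 0x0 marks an incomplete (unresolved) entry.
            if flags != "0x0" and mac != "00:00:00:00:00:00":
                devices[ip] = mac
    return devices

# Invented sample data in the /proc/net/arp layout:
sample = """IP address       HW type     Flags       HW address            Mask     Device
192.168.1.1      0x1         0x2         a4:2b:b0:c1:00:01     *        eth0
192.168.1.57     0x1         0x0         00:00:00:00:00:00     *        eth0
192.168.1.23     0x1         0x2         d8:3a:dd:12:34:56     *        eth0"""

print(parse_arp_table(sample))   # two complete entries; the unresolved one is skipped
```

A real deployment would read the live table (open("/proc/net/arp")) on an interval and diff successive snapshots to drive presence tracking and alerting.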
Internet Access Testing
For connectivity (control-plane) testing we utilize TCP over IPv6 to establish a connection into our cloud. If IPv6 is not available on the network, we simply use IPv4. The primary purpose of this connection, in addition to some of our own basic control and status, is to validate that TCP sessions can be reliably established (outbound) through the user’s firewall. We consider this critical because creating this type of connection, with all its associated protocol handshaking, verifies some of the most fundamental (and important) functions expected of the firewall and gateway router.
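The IPv6-first-with-IPv4-fallback pattern can be sketched as follows. This is a generic illustration of the technique, not our actual client; the endpoint name would be one of our cloud servers.

```python
# Sketch of IPv6-preferred outbound TCP connectivity checking with an
# IPv4 fallback. Generic illustration, not production code.
import socket

def prefer_ipv6(addrinfos):
    """Order getaddrinfo() results so IPv6 candidates are tried first."""
    return sorted(addrinfos, key=lambda ai: 0 if ai[0] == socket.AF_INET6 else 1)

def connect_with_fallback(host, port, timeout=5.0):
    """Try each resolved address in preference order; return the first
    successfully connected socket, else raise the last error seen."""
    last_err = None
    for family, socktype, proto, _, sockaddr in prefer_ipv6(
            socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)):
        try:
            # Completing the three-way handshake outbound through the
            # firewall is itself the test being performed here.
            return socket.create_connection(sockaddr[:2], timeout=timeout)
        except OSError as err:
            last_err = err
    raise last_err or OSError("no addresses resolved")
```

If the network has no IPv6, getaddrinfo() simply returns no AF_INET6 candidates and the loop falls through to IPv4, which matches the behavior described above.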
In contrast, for (data-plane) performance and reliability verification we use UDP and not TCP when testing to our various cloud servers. To maximize end-to-end compatibility, as well as to complement our TCP connectivity testing with IPv6, we do these measurements using IPv4. The reason for using UDP instead of TCP is that TCP's inherent resiliency and retransmission capabilities hide the underlying cause of any performance degradation that occurs. When a low download or upload speed is observed, the next question is why. Is it due to packet loss, is the path delay abnormally high, is there congestion-induced delay variation, is packet reordering present, maybe data corruption, or is it a combination of these factors? This is why we feel that, although they certainly have their place, traditional TCP-based internet speed tests lack the specificity required for meaningful network monitoring. What can seem like a simple need for "more bandwidth" is often due to a heavily errored connection. And many applications are significantly more sensitive to errors than the relatively long-duration TCP sessions used by typical speed tests.
As for monitor-io, we use very specific UDP datagram exchanges so we can accurately monitor the packet-by-packet behavior of the connectivity throughout testing. This allows us to detect, analyze and quantify these types of impairments when they occur. And because this testing is so lightweight (approx. 14 Kbps), it can run continuously 24x7. This is a considerable difference when compared to traditional speed tests that need to saturate the network connection in order to achieve a steady-state maximum throughput. It is also worth mentioning that our underlying measurement techniques are based on well-established IETF IPPM standards covering areas that include test initiation, sending disciplines, loss characterization, time stamping, sequence number processing, delay variation and packet reordering.
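The kind of per-packet processing described above can be illustrated with a small sketch. This is a deliberately simplified stand-in for IPPM-style analysis, with invented sample data; the reordering and delay-variation definitions here are reduced versions of the standardized metrics.

```python
# Simplified sketch of per-packet impairment analysis over a UDP probe
# stream. Each record is (sequence_number, one_way_delay_ms) for a packet
# that arrived; sequence numbers never seen by the receiver count as lost.

def analyze_probes(received, sent_count):
    seqs = [seq for seq, _ in received]
    delays = [d for _, d in received]
    lost = sent_count - len(set(seqs))
    # Reordering: a packet arriving with a lower sequence number than one
    # already received (a simplified form of the RFC 4737 metric).
    reordered = sum(1 for i in range(1, len(seqs)) if seqs[i] < max(seqs[:i]))
    # Delay variation: spread of one-way delays (RFC 3393, reduced here
    # to max-minus-min over the sample).
    jitter = max(delays) - min(delays) if delays else 0.0
    return {"lost": lost, "reordered": reordered, "delay_variation_ms": jitter}

# Invented sample: 5 packets sent; seq 2 never arrived; seq 4 beat seq 3.
sample = [(0, 20.1), (1, 20.4), (4, 21.0), (3, 35.2)]
print(analyze_probes(sample, sent_count=5))
# {'lost': 1, 'reordered': 1, 'delay_variation_ms': ~15.1}
```

Because each datagram carries its own sequence number and timestamps, every impairment is attributable to a specific packet, which is exactly what a saturating TCP speed test cannot provide.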
A very significant aspect of our methodology is the concurrent use of multiple geographic regions to source and sink test traffic. The software on our device interacts with numerous cloud servers around the globe to provide very diverse network coverage. This allows us to much more conclusively differentiate between a network issue occurring somewhere in the internet vs. your specific internet service being down because you can’t reach anything. And this is an important distinction because to most people, these look and feel the same.
The use of diverse geographic regions also provides the ability to make predictions about the relative proximity of internet issues to the end user, what we call a "Distance" score (1-10). We achieve this through differential performance analysis between regions. After test traffic from our device enters an ISP, it diverges after only a few hops as it is routed over the "best" paths toward our different destinations. We then compare the frequency, density and severity of any impairment differences that occur.
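The intuition behind this differential analysis can be sketched with a toy heuristic. The scoring below is entirely invented for illustration (our actual algorithm is not reproduced here): paths to different regions share only the first few hops, so impairment seen uniformly across all regions suggests a problem close to the user, while impairment confined to one region points deeper into the internet.

```python
# Hypothetical heuristic for a proximity ("Distance"-style) score.
# Assumption made for this sketch only: 10 = impairment shared by all
# regions (likely near the user), low = isolated to one region.

def proximity_score(loss_by_region):
    """loss_by_region: {region: loss_fraction}. Returns 1-10, or None
    if no impairment was observed at all."""
    losses = list(loss_by_region.values())
    worst = max(losses)
    if worst == 0:
        return None
    # Fraction of regions sharing the impairment (within 50% of worst):
    # 1.0 means every path is affected, implicating the common early hops.
    shared = sum(1 for l in losses if l > 0.5 * worst) / len(losses)
    return max(1, round(shared * 10))

print(proximity_score({"us-east": 0.04, "eu-west": 0.05, "ap-south": 0.04}))  # -> 10
print(proximity_score({"us-east": 0.05, "eu-west": 0.0, "ap-south": 0.0}))    # -> 3
```

In practice, frequency, density and severity of impairments would all feed the comparison rather than a single loss figure, but the shared-versus-isolated logic is the core idea.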
Another thing to call out regarding our regionalized approach to testing is that it should not be confused with devices or software that utilize ICMP pings or HTTP GETs to confirm reachability to popular internet websites. This is because, although they may seem “diverse”, the extremely efficient distribution of directly connected content and hosting in the modern internet means that there is often little or no difference in the actual network resources being traversed (i.e., the routers, switches and links in the path). Also, when considered with today’s highly optimized caching, load balancing and virtualized infrastructures, website pings and GETs end up being a very poor choice for any performance impairment analysis intended to be relevant to the end user and their access connection.
Our Device
The monitor-io device is connected behind a user’s firewall alongside all their other computers, game consoles and IoT hardware. It obtains its IP address and LAN configuration from the exact same DHCP source. Its connectivity is via a wired 10/100 Ethernet port because we require very little bandwidth and Wi-Fi would be too unpredictable for precise measurements. We have specialized software that runs on the device and exchanges test traffic with our regional servers. And while the device obviously participates in testing, the actual measurement processing and data analysis is all handled in the cloud.
The hardware itself is based on an off-the-shelf single board computer that runs Debian Linux. The kernel has been slightly optimized for our high-precision measurements and general user access has been removed (other than a local web interface). The LCD display provides convenient across-the-room status indicating when your internet connection is clean, taking errors or hard down, including accumulated online, offline and outage time.