There was a time, in the '80s and part of the '90s, when programmers had to anticipate and manage connectivity problems between different systems. It was a hardcore era in which you also had to manage memory and every other allocated resource by hand.
But as time went on, better connections and Ethernet meant that data flowed with little difficulty for the average programmer. It seemed that the era of incomplete data and lost connections was gone forever, like bell-bottoms and gin and tonics; but we know that everything comes back.
Now, in 2015, the world is full of wireless devices and mobile apps connected to the Internet, playing audio and video streams or checking statuses in real time. Many of them are not even on Wi-Fi, but depend on whatever connection quality the telecom company of the moment provides.
And now that the average person can have a 100 Mb fibre-optic line, we find web pages of truly savage weight. In this scenario, the average developer hardly considers whether or not there is a live connection at all. Needless to say, in most cases QA does no better, merely testing the program or service online and offline.
Online or offline?
Obviously we can, and must, consider the cases where no connection is available and decide whether the application can fall back to offering the user a partial service. For example, in a mobile application we can replicate certain services with a local database that synchronizes once the connection is recovered. But testing and corrective action should not end there.
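The write-locally-then-synchronize idea can be sketched as follows. This is a minimal illustration, not a real persistence layer: the in-memory dictionary stands in for a local database such as SQLite, and `send_to_server` is a hypothetical callable that raises `ConnectionError` while offline.

```python
from collections import deque

class OfflineFirstStore:
    """Writes land in the local replica immediately, so the user keeps a
    partial service; each write is also queued and flushed to the server
    once connectivity returns."""

    def __init__(self, send_to_server):
        self.local = {}              # local replica (stands in for SQLite, etc.)
        self.pending = deque()       # writes awaiting synchronization, in order
        self.send_to_server = send_to_server  # may raise ConnectionError

    def write(self, key, value):
        self.local[key] = value      # the user sees the change at once
        self.pending.append((key, value))
        self.sync()                  # try optimistically; harmless if offline

    def sync(self):
        """Flush pending writes in order; stop at the first failure and
        keep the rest queued for the next attempt."""
        while self.pending:
            key, value = self.pending[0]
            try:
                self.send_to_server(key, value)
            except ConnectionError:
                break                # still offline; retry later
            self.pending.popleft()   # confirmed by the server, safe to drop
```

Calling `sync()` again when the app detects that the connection is back (e.g. from a connectivity-change callback) drains the queue.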
It is important to bear in mind that connections may fail for short periods in environments with poor connectivity, for example when the phone is far from optimal coverage or the device is in motion. At that point we must verify the consistency of the data sent and received. A phone may request a customer's identity together with their list of orders, but receive only the identity, keeping the list of orders it had previously seen and cached. Our service has to handle this situation and not let the user make decisions based on incorrect information.
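One way to guard against that half-received state is to commit both pieces of data together or neither, and to label anything served from cache as stale. The sketch below assumes two hypothetical fetch functions that raise `ConnectionError` when the network drops; the names are illustrative.

```python
def refresh_customer_view(cache, fetch_identity, fetch_orders, customer_id):
    """Fetch identity and orders as a pair; update the cache only if BOTH
    arrive, so the screen never mixes a fresh identity with stale orders
    (or vice versa)."""
    try:
        identity = fetch_identity(customer_id)
        orders = fetch_orders(customer_id)
    except ConnectionError:
        stale = cache.get(customer_id)
        if stale is not None:
            # Show the last consistent pair, but flag it so the UI can
            # warn the user not to act on possibly outdated information.
            return {**stale, "stale": True}
        raise  # nothing cached either: surface the error
    cache[customer_id] = {"identity": identity, "orders": orders}
    return {"identity": identity, "orders": orders, "stale": False}
```

The key point is that a partially successful refresh never overwrites half of the cached pair.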
Low data-transfer speed
Whether in games, auction systems, or even ticket sales, high latency can ruin the user experience. Even if the problem is not on our server, users will often complain that "this application lags." It is difficult to fight this, but at least, if we know the latency is not caused by our server, we can warn the user that the application will not work well due to external causes. In the case of web pages, bundling all files into a single compressed block also usually works very well, preventing each extra connection from adding another delay to the load.
Ultimately, the world of connectivity goes far beyond on/off. If quality means knowing our product, then we must account for the different cases that arise from a channel outside our control. All these scenarios should be covered by automated tests (both integration and unit tests, where applicable). There are also tools for simulating such situations, like the network throttling in the Chrome developer console, although ideally the QA department should exercise them as well.
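In unit tests, the dropped-connection scenario can be simulated without any real networking by using a test double that fails on demand. A minimal sketch; `FlakyConnection` and `fetch_with_retry` are illustrative names, not a real library API.

```python
class FlakyConnection:
    """Test double: fails the first `failures` calls with ConnectionError,
    then succeeds -- handy for exercising retry and consistency logic."""

    def __init__(self, failures):
        self.failures = failures
        self.calls = 0

    def fetch(self, url):
        self.calls += 1
        if self.calls <= self.failures:
            raise ConnectionError("simulated connection drop")
        return f"payload from {url}"

def fetch_with_retry(conn, url, attempts=3):
    """The code under test: a simple bounded retry that gives up (and
    re-raises) after the last attempt."""
    for attempt in range(attempts):
        try:
            return conn.fetch(url)
        except ConnectionError:
            if attempt == attempts - 1:
                raise
```

The same double can drive integration-style tests: vary `failures` to cover the happy path, the recover-after-a-blip path, and the permanently offline path.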
Written by Ferran Ferri