The end rule of the Graph Workflow model suggests that a successfully crafted data graph can aid the description of data, support exploration that extracts signal by sieving out noise, or provide important inferential insights for prediction and diagnosis.
Decoding the visual information in a data graph can thus lead to (i) description, by reporting statistical summaries; (ii) exploration, extracting signal from the data by sieving out the noise; and (iii) inferential analysis and prediction using data-mining methods.
To succeed in visual decoding, one must iterate through the Graph Workflow process several times, sometimes even scrapping the entire output and starting fresh with a new idea. Because the same data can be graphically encoded in myriad ways, the competing visuals must be ranked according to the qualities of data graphs.
Indeed, conformance to the qualities of data graphs can only be achieved through such an iterative approach. In fact, a data graph can be thought of as an iterative experimental estimator, whereby each iteration reduces the estimation error in decoding.
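The estimator analogy can be sketched as a loop that proposes competing encodings, scores each one, and keeps the best candidate found so far, so that the decoding error never increases across iterations. The candidate list and the numeric quality score below are hypothetical placeholders for illustration; in practice the judgment comes from the qualities of data graphs, not from a single number.

```python
import random

# Hypothetical quality score: higher means the encoding is expected to
# decode with less error. The base values are illustrative assumptions.
def quality_score(encoding: str, rng: random.Random) -> float:
    base = {"scatter": 0.9, "line": 0.8, "bar": 0.7, "pie": 0.3}[encoding]
    return base + rng.uniform(-0.1, 0.1)  # noisy, experimental judgment

def iterate_graphs(n_iter: int = 100, seed: int = 0):
    """Propose encodings repeatedly; keep the best-scoring one so far."""
    rng = random.Random(seed)
    candidates = ["scatter", "line", "bar", "pie"]
    best, best_score, history = None, float("-inf"), []
    for _ in range(n_iter):
        enc = rng.choice(candidates)     # try out a different form
        score = quality_score(enc, rng)  # examine and reflect on the plot
        if score > best_score:           # let it influence the next plot
            best, best_score = enc, score
        history.append(best_score)
    return best, history

best, history = iterate_graphs()
# The best score so far never decreases: each iteration can only
# reduce (or match) the current estimation error in decoding.
assert all(a <= b for a, b in zip(history, history[1:]))
```

The point of the sketch is the structure of the loop, not the scores: each candidate plot is examined, compared with the best so far, and allowed to influence the next attempt.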
To use the words of Chambers, Cleveland, Kleiner and Tukey (1983, p.316):
“Effective data analysis is iterative. We cannot expect the first plot we make, nor any single plot, to be the ‘right’ plot for the data; we must carefully examine and reflect on each plot that we make, letting it influence our course of action and the plot that we make next.”
In my work, as a rule of thumb, I expect to run no fewer than 100 iterations before settling on a reasonably informative graph. In the first iterations I try out different forms of data graphs, often experimenting with new ideas. At this experimentation stage, the most important steps are data management and exploratory data analysis. Once I settle on what I believe to be appropriate visual implantations and retinal variables, it takes many more iterations to arrive at the right amount of graph identification and the right use of graph-enhancement tools.
Visual perception is our ability to make sense of the information carried by visible light. Visual processing is dominant over other types of sensory processing, but it is innately limited.
To construct effective graphs that suit our bounded visual capabilities, we must understand the fundamental laws that underlie the visual system and human cognition.
Our visual reality is based on a perception that is continually updated in light of new information. Visual processing can be thought of as Bayesian, albeit with bounded learning. The retina gathers complex data, and we learn by simplifying visual stimuli into objects and patterns that are meaningful given our prior understanding and the context in which the new data are delivered. We do not see with our eyes: we perceive, hypothesize and interpret with our brain.
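The Bayesian analogy can be made concrete with the standard statement of Bayes' rule, used here purely as an illustration: the belief in a scene hypothesis $H$ after receiving retinal data $D$ is

```latex
% Posterior belief in a scene hypothesis H given retinal data D:
\[
  p(H \mid D) \;=\; \frac{p(D \mid H)\, p(H)}{p(D)},
\]
% where p(H) is the prior shaped by past experience and context, and
% p(D | H) is the likelihood of the visual stimulus under hypothesis H.
```

with the prior $p(H)$ standing for our existing understanding and context; "bounded learning" then says the brain only approximates this update rather than computing it exactly.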
In the words of William Cleveland (1994), one of the most prominent statisticians at the forefront of VDA:
“When a graph is made, quantitative and categorical information is encoded by a display method. Then the information is visually decoded. This visual perception is a vital link. No matter how clever the choice of the information, and no matter how technologically impressive the encoding, a visualization fails if the decoding fails. Some display methods lead to efficient, accurate decoding, and others lead to inefficient, inaccurate decoding. It is only through scientific study of visual perception that informed judgments can be made about display methods.” (p.1)