I just read a really interesting new article in the Journal of the Association for Information Science and Technology called “Eye-tracking analysis of user behavior and performance in Web search on large and small screens” that got me thinking about several things I’d like to post here. Jaewon Kim, Paul Thomas, Ramesh Sankaranarayana, Tom Gedeon, and Hwan-Jin Yoon, all from The Australian National University in Canberra, Australia, wrote the article. It is available to ASIS&T members in “early view” via the ASIS&T Digital Library.

The researchers used eye-tracking equipment to examine how 32 people performed and interacted with Web searches on large screens, such as a laptop screen, vs. small screens, such as a smartphone's. In total, the team gathered data for 640 searches, which were either informational (“How many spikes are in the crown of the Statue of Liberty?”) or navigational (“Go to the homepage of the Canberra Cavalry baseball team.”). They also conducted post-test interviews with the participants.

You can read the article for details, but in summary, the researchers concluded that participants had to work harder to extract information from the links on the small screens, and they scanned results more narrowly there. That being said, participants took similar amounts of time to click their first link on both screen sizes, and they were similarly successful in finding answers to the queries on both.

Recommendations include placing a link to a phone-optimized version of a document within a full-size document, providing additional content for the top search results on phone screens, and using Page Up/Page Down buttons on the phone screen version to encourage “bigger picture” eye movement.

So, why was I so excited about this article? Many reasons.

For starters: two of my main areas of interest are usability and visual information. How photojournalism professionals interact with digital images was the subject of my dissertation, and I published an edited book on non-text information in 2012.

I read extensive amounts of literature about visual information when I was studying for my PhD – from vision science, cognitive psychology, art, you name it. One thing I took from all that reading is that specific things cause people to look at certain areas of whatever they’re viewing, whether it’s a photograph or a Google search results screen. In the case of a photograph, we might fixate on it because we love some aspect of it: the people in the picture, the color of a flower, whatever is important to us, as Patrick Wilson eloquently pointed out about text documents in his 1968 book Two kinds of power: An essay on bibliographical control. In the case of a Web page, we might stare at one area of the page because it’s interesting, but we might also be confused by it, as the article’s authors noted in their literature review.

I’ve seen this confusion when I’ve conducted usability studies. When people look at a page for a long time, they’re not always sure what they need to do to get the results they want. (This appears to be especially true in the case of library websites and online catalogs, because library interfaces are perpetually complex and confusing to many people.) I’ve never used eye-tracking hardware, but I’ve used other techniques such as simple observation as well as Morae software. It fascinates me to watch people use a mouse to point to where they’re looking on the screen. You can’t do that on a phone.

I’ve also thought quite a bit about screen sizes in the past. I was invited to write a JASIST review of Katy Börner’s 2010 book about scientific data visualization, Atlas of science: Visualizing what we know. The large-format book, which won ASIS&T’s Best Information Science Book Award in 2011, features many beautiful data visualizations, but even at that size the visualizations are too small to comprehend the data represented in the maps. I noted in my review:

How will these spatial challenges be met as visualizations are used more frequently in the future to convey datasets, trace projections, or find and view information in search engines? It seems the disparities in the separate trends toward smaller electronic devices and the level of detail in science maps may need to be addressed before the potential of visual documentation can be fully realized.

The small and large screen size issue can be applied to almost anything we view these days. I love the convenience of my Android phone’s size, but I can’t comfortably watch a movie on it. I’ll search, read, and check social media on it when I’m feeling lazy or don’t have access to my laptop. When I’m at home and need to get real work done, I use my dual monitors that I’ve connected to my laptop. (As an example, I can’t imagine writing this post on my phone, but I’ll use it if I’m out for a walk and need to respond to a colleague’s quick question.)

The differences the authors described between search engine presentations on large and small screens made me want to compare them for myself, so I Googled dog parks in Seattle on my laptop:

[Screenshot: “dog parks in Seattle” search results on a laptop]

And on my phone (first and second screens that appear, respectively):

[Screenshots: “dog parks in Seattle” search results on a phone, screens 1 and 2]

As we can see here, the laptop version features images of Seattle dog parks across the top of the screen and a map view of Seattle dog parks, with the standard search results below them. The initial phone screen focuses on the map and locations, with the standard results on the second screen. These choices make sense to me: if we’re out somewhere with our phones, we’re probably looking for something near our location, whereas we could be doing almost anything on our laptops. GPS-enabled phones further promote finding things by location.

Well, I could go on forever. But I won’t. Or maybe I already have.

In the end, articles like this one by Kim et al. are important for our field because they bridge the theoretical and the practical. The article concludes with specific recommendations for mobile interface design that a search engine or app company could start implementing immediately. I’ve believed for years that information science is an applied field. So much of what information science researchers do (or should do) has the potential to make a tremendous impact on how today’s connected world finds and receives information. Information science practitioners such as IA professionals do research every day to understand their users’ needs, but they may or may not have the methodological expertise that trained researchers have. We have both IA professionals and academic researchers in our association, yet we barely interact with each other.

Why don’t our research and practice worlds work together? Why don’t information scientists do a better job of communicating and applying our vital knowledge and research to the rest of the world? Do you ever go anywhere these days without seeing people glued to their phones, people who could probably use some help finding what they’re trying to find? (I don’t.)

So concludes the inaugural content-rich post of The ASIS&T Blog. Let’s go forth and change the field… or at least talk about doing it on this blog!
