
Cynthia Wisehart on eGlass

I think we’ve all had quite different experiences adapting to video calls and meetings, and a personal range of comfort and discomfort.

I learned about some of my preferences in a live demo of a product called eGlass. It was the most comfortable I’ve ever been on a video call. I learned from the experience that I’m quite sensitive to the micro-latency of Zoom calls, not just in speech but in eye movement. I’ve always preferred either face-to-face or the phone. For me, video calls can be highly distracting because the video feels dissonant with the audio. I can’t sync them in my brain, and any dropped frames or small sync lapses disrupt me. Often I mostly ignore the facial video, which I’m sure makes me seem disengaged. By contrast, when I’m on the phone or headphones I feel fully present. I remember better after audio-only or audio/screenshare meetings, and I don’t experience fatigue. Facial expressions are very important to communication, but for me those cues don’t read well with even the smallest video or sync latency. That’s just me.

I was aware of the discomfort but I didn’t really understand it until I had the eGlass demo. This product is a transparent, writeable glass with an embedded camera that sits between the speaker and their computer. The speaker talks directly into the camera and can write on the glass or call up browser images—like a whiteboard—but facing the listener! In the demo, I felt 100% engaged with the person on the other end—in this case Bayley Pierson of eGlass. His eye contact was completely natural and synced; when he gestured to write on the screen, his eyes followed and returned to me also in an entirely natural way. Pierson explained that he had a similar aha about himself—he had learned he was a lip reader, even though he could hear. It’s interesting to think about the many nuances of sensory processing that affect our personal experience of remote collaboration. We don’t merely “see” or “hear”—we are each practiced in our preferences and the composite of information we personally rely on.

Not surprisingly, this observation has a scientific discipline behind it. Cognitive Learning Theory (CLT) is about understanding how the human mind works while people learn (or communicate). The theory focuses on how the brain processes information and how learning occurs within that internal processing. It acknowledges that communication takes place within each of our brains and nervous systems, across pathways that are native to us and well-cultivated over our lives. They are so ingrained we don’t notice them. Pierson didn’t know he read lips.

The eGlass device was initially developed for teachers, based in part on CLT. eGlass allows teachers to write on the board without turning their backs. This improves SEL (social and emotional learning) by allowing students to see the teacher’s facial expressions, gestures, and gaze alongside their writing; extraneous cognitive load is reduced while germane load increases. eGlass can improve that balance in a corporate setting too.

I learned that it’s not enough for me to have facial expressions, gestures, and gaze if they are unnaturally disrupted. I guess that makes me high maintenance on video calls. Now that I know what bothers me, I have ideas for how to cope. It’s also made me more sensitive to the cognitive needs of the people I’m collaborating with. We’ve all spent a lifetime cultivating our face-to-face sensory skills and preferences. We will need to learn and develop those same things in our virtual and hybrid collaboration.
