When we consider what the future holds, the possibilities are endless, from curing diseases to inhabiting other planets. One thing these possibilities have in common is the role technology plays in every one of them.
As a follow-up to our September Tuesday Perspectives blog post on user interface design for mobile apps, where we discussed some basic principles, this post focuses on what the future holds for user interface design and development.
We’ve had a glimpse of what the future of user interfaces can look like in Hollywood movies such as the Iron Man and Star Trek series. It may seem a bit sci-fi and way out there, but these innovations in user interface design and development are already here or will soon become reality.
Graphical User Interfaces:
Unlike natural and plain text-based interfaces, graphical interfaces are increasingly popular because graphic icons reduce the amount of text on screen while delivering the intended message more effectively. Being able to interact with a device visually has proven invaluable, as it prompts action and response better than text-based interfaces do.
Organic User Interfaces:
There will still be strong investment in graphical and natural interfaces, but the new technology of organic user interfaces can’t be ignored. These involve a non-flat display, with users controlling an object by manipulating an actual physical shape. It’s called organic because of the natural way it works. Again taking a cue from futuristic Hollywood movies, multi-dimensional displays and interactions will slowly become reality. Although this technology will take time to evolve, its impact on the user experience will be vast.
Gesture Recognition:
Gesture recognition is pretty self-explanatory. It is a sensor-based technology triggered by touch or movement from a finger, hand, or other part of the body, which acts as input to perform a computing task. Voice can also act as a gesture. Adding a z-axis to our existing two-dimensional user interfaces will open up and undoubtedly improve the human-computer interaction experience. Just imagine how many more functions can be mapped to our body movements. Design challenges in two dimensions can be solved by a third dimension that provides more flexibility and intuitiveness.
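At its core, a gesture-driven interface maps a recognized gesture (touch, body movement, or voice) to a computing task. The sketch below is purely illustrative; the gesture names and actions are invented for this example and don’t come from any real gesture-recognition toolkit:

```python
# Minimal sketch: dispatching recognized gestures to UI actions.
# Gesture names and actions here are hypothetical examples.

def handle_gesture(gesture: str) -> str:
    """Map a recognized gesture (touch, movement, or voice) to a task."""
    actions = {
        "swipe_left": "go to previous screen",
        "swipe_right": "go to next screen",
        "pinch": "zoom out",
        "spread": "zoom in",
        "push_forward": "select item",      # a z-axis (depth) gesture
        "voice:open": "open application",   # voice acting as a gesture
    }
    return actions.get(gesture, "no action mapped")

print(handle_gesture("push_forward"))  # a third-dimension gesture
```

Note how the z-axis gesture slots in beside the familiar two-dimensional ones: adding depth simply expands the set of inputs available to map.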
Imagine being able to twist, fold, or bend a display in order to interact with or respond to the computing system. This type of display is already here, and it changes how designers and developers think about user experience.
Augmented Reality (AR):
This technology is a booming industry. Although similar to Virtual Reality (VR) in that our perceptions are altered, Augmented Reality (AR) doesn’t completely immerse the user somewhere else the way VR does. Instead, AR presents a direct or indirect view of a physical, real-world environment whose elements are augmented by computer-generated input, such as sound and graphic effects. Giants like Samsung, Microsoft, and Apple are investing heavily in this technology. When Microsoft unveiled the HoloLens in 2015, it set the stage for how remarkable user interfaces and experiences will be in the future.
Voice User Interface:
Although voice user interfaces have existed for decades, the technology has yet to achieve the revolutionary success expected of it. Personal assistants like Siri on iOS and Cortana on Windows have made life a bit easier (or, some would claim, more frustrating, given the spotty accuracy of results), while the Google Home and Amazon Echo have brought the voice user interface into more mainstream use in the home. Voice user interfaces have enormous potential to change the very nature of our everyday lives. Time will tell how aggressively this technology is pushed forward.
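To make the flow concrete: a voice user interface turns an utterance into an intent, then into a response. The toy sketch below is not how Siri, Cortana, or Alexa actually work; real assistants use speech recognition and language models, and the intents here are invented for illustration:

```python
# Toy sketch of voice-user-interface intent matching.
# Real assistants use speech recognition and NLU models; this
# keyword matcher only illustrates the utterance -> intent flow.

def recognize_intent(utterance: str) -> str:
    """Map a transcribed utterance to a hypothetical intent name."""
    text = utterance.lower()
    if "weather" in text:
        return "get_weather"
    if "timer" in text:
        return "set_timer"
    if "play" in text and "music" in text:
        return "play_music"
    return "unknown"

print(recognize_intent("What's the weather like today?"))  # get_weather
```

The gap between this keyword matcher and handling natural, ambiguous speech is exactly where the accuracy frustrations mentioned above come from.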
Tangible User Interface (TUI):
This is a fully sensor-based interface that recognizes touch on its surface. Essentially, it is a direct link between a user’s physical manipulations, such as grasping or moving an object, and how the system responds to those touch triggers. This approach is used extensively in Microsoft PixelSense (formerly known as Microsoft Surface), where it was first introduced.
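The link between physical manipulation and system response can be sketched as a simple event loop: the surface senses an object being placed, moved, or lifted, and the system reacts. The event names and responses below are hypothetical and not drawn from any real PixelSense API:

```python
# Sketch of a tangible-user-interface response loop: the system reacts
# to physical objects placed, moved, or lifted on a sensing surface.
# Event names, object IDs, and responses are hypothetical.

def respond_to(event: str, obj_id: int) -> str:
    """Translate a sensed physical manipulation into a system response."""
    responses = {
        "placed": f"show menu for object {obj_id}",
        "moved": f"update position of object {obj_id}",
        "lifted": f"hide menu for object {obj_id}",
    }
    return responses.get(event, "ignore")

print(respond_to("moved", 7))
```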
We are already witnessing the next generation of user interface technology with the launch of the new iPhone X, its facial recognition technology, and the many new augmented reality apps already being built for it. The more complex the scenarios, the greater the challenge for designers and developers. We will have to think beyond features, functionality, and the mere technical aspects, and consider a user’s interactions, environments, activities, personas, and more.