Amazon has unveiled a new generative AI-powered version of Alexa, its popular digital assistant, calling it ready for the world of "ambient computing": AI in speakers, watches, rings, and even devices embedded in your skin.
What exactly is ambient computing?
In short, ambient computing refers to technology that you may not see but that is always around you. For example, Amazon's new generative AI-powered Alexa+ envisions a future where you no longer need to give Alexa several separate commands, but can instead issue one long, natural-language request that it parses and acts on. Over the next two years, you'll see more of these technologies go mainstream, letting you communicate with your gadgets in a seamless, human-like way.
In other words, ambient computing will change the way we use most modern gadgets, gradually getting us used to its invisible presence. This shift will be built on advanced home automation woven into our surroundings, including robots that manage our laundry and groceries.
How will it change the way you consume technology?
As ambient computing becomes mainstream, your interactions with technology will become fewer and more automated. For example, if you are talking to a colleague about a future meeting, an internet-enabled device nearby could automatically add it to your calendar. It also means you will need fewer apps on your phone or computer for most of the things you do today: ordering food, hailing rides, or even sending surprise gifts online. Additionally, ambient computing will route your everyday technology needs through a combination of smart wearables such as glasses, watches and rings, making you less dependent on your phone. The purpose of ambient computing is to make technology less intimidating and more natural.
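To make the calendar example above concrete, here is a toy sketch in Python of how an ambient device might lift a meeting time out of an overheard sentence. The pattern matching and event format are illustrative assumptions, not any real assistant's pipeline; a production system would use a speech and language model rather than a regular expression.

```python
import re
from datetime import datetime

def extract_meeting(utterance: str, year: int = 2025):
    """Toy example: spot a 'meeting on <Month> <day> at <hour> am/pm'
    phrase in an overheard sentence and draft a calendar entry."""
    match = re.search(
        r"meet(?:ing)?\s+on\s+([A-Za-z]+)\s+(\d{1,2})\s+at\s+(\d{1,2})\s*(am|pm)",
        utterance, re.IGNORECASE)
    if not match:
        return None  # nothing that sounds like a meeting was heard
    month, day, hour, meridiem = match.groups()
    hour24 = int(hour) % 12 + (12 if meridiem.lower() == "pm" else 0)
    start = datetime.strptime(f"{month} {day} {year} {hour24}", "%B %d %Y %H")
    return {"title": "Meeting", "start": start.isoformat()}

event = extract_meeting("Let's have our meeting on March 14 at 3 pm, okay?")
print(event)  # {'title': 'Meeting', 'start': '2025-03-14T15:00:00'}
```

The point is not the parsing itself but the interaction model: the user never opens a calendar app; the device acts on ambient conversation.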
Isn’t the technology already here?
Early embers of ambient computing can already be seen: in always-listening smart speakers like Amazon’s Alexa, which can queue up a music playlist tailored to how long you sit down for dinner; in smart plugs that can, say, switch on a geyser based on your proximity to your home; and in mixed-reality headsets like Apple’s Vision Pro that replicate a big-screen theatre experience. However, all of these are still in their early stages.
How will AI contribute to ambient technology?
In contrast to today’s touch-based interfaces, the core operational interface of ambient AI will be powered by large language models such as OpenAI’s new GPT-4.5, along with natural language processing (NLP). Together, they give machines an understanding of human dialogue and emotion. AI will be the mainstay of ambient computing, forming the base layer that processes information without explicit input from us. AI will also enable cross-communication between devices, ensuring that all of our gadgets work as one seamless, invisible network.
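One way to picture that "base layer" is as a hub that routes intents to whichever device can handle them. The sketch below stubs this out in Python; in practice an LLM/NLP model would produce the intents from speech, and the Hub API, device names, and intent strings here are all hypothetical.

```python
class Hub:
    """Toy ambient hub: maps intent names to registered device actions."""
    def __init__(self):
        self._handlers = {}

    def register(self, intent: str, device: str, action):
        # action is a callable that performs the device-side work
        self._handlers[intent] = (device, action)

    def dispatch(self, intent: str, **params):
        device, action = self._handlers[intent]
        return f"{device}: {action(**params)}"

hub = Hub()
hub.register("lights.dim", "living-room-lamp",
             lambda level: f"dimmed to {level}%")
hub.register("music.play", "kitchen-speaker",
             lambda playlist: f"playing '{playlist}'")

# An upstream language model might map "set the mood for dinner"
# to these two intents, with no app or touch input involved:
print(hub.dispatch("lights.dim", level=30))
print(hub.dispatch("music.play", playlist="dinner jazz"))
```

The design choice worth noticing is the indirection: devices only expose intents, so the language model never needs to know which gadget fulfils a request.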
How will we live without a phone or apps?
The goal behind ambient computing is to shrink the number of use cases that require a phone. Last year’s Rabbit R1 and Deutsche Telekom’s AI phone concept showcased a world where AI models replace the apps on a phone: instead of you opening an app, the model books a taxi through a service like Uber based on your voice command. Eventually, augmented-reality smart glasses powered by voice AI models could replace the need for a phone entirely, superimposing notifications before your eyes as required.
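As a rough illustration of app-free interaction, the sketch below turns a spoken ride request into the kind of structured payload an AI layer might hand to a ride-hailing service. The payload shape and parsing are invented for illustration; this is not Uber's real API.

```python
import re

def voice_to_ride_request(command: str):
    """Toy example: map 'get me a taxi to X' to a structured
    service request (hypothetical payload, not a real API)."""
    match = re.search(r"(?:cab|taxi|ride)\s+to\s+(?:the\s+)?(.+)",
                      command, re.IGNORECASE)
    if not match:
        return None
    return {
        "service": "ride-hailing",
        "destination": match.group(1).rstrip(". "),
        "seats": 1,
    }

req = voice_to_ride_request("Hey, get me a taxi to the airport.")
print(req)  # {'service': 'ride-hailing', 'destination': 'airport', 'seats': 1}
```

The app disappears from the loop: the user speaks, the AI layer builds the request, and a backend service fulfils it.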