AI; emotions as the connectors between us and the AI engines (3/3)

By George Achillias

One of the most important areas where personalisation is about to be redefined is fashion. "There's been a significant move and effort from companies like Amazon to understand how fashion is developed in the world," said Kavita Bala, a professor at Cornell.

Yet Amazon appears to be pushing that algorithmic approach even further. For example, Amazon researchers based in Israel have built machine learning that, by analysing just a few labels attached to images, can determine whether a particular look can be considered stylish. The software could eventually offer fashion feedback or suggest adjustments. The work is notable because computers usually need extensive labelling to learn from visual data.
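To make the general idea concrete, here is a minimal, hypothetical sketch of the kind of technique that lets a system learn a "stylish / not stylish" judgement from only a handful of labels: reuse a vision model pre-trained on a large dataset and fine-tune a tiny classification head. The model choice, labels, and file paths are illustrative assumptions, not Amazon's actual system.

```python
# Hypothetical sketch: fine-tune a pre-trained vision model on a handful of
# "stylish" / "not stylish" labels. Illustrative only; not Amazon's system.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Reuse a network pre-trained on ImageNet so very few new labels are needed.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                # freeze the visual backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: stylish / not

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

def predict_stylish(image_path: str) -> float:
    """Return the probability that an image is 'stylish' (after fine-tuning the head)."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = model(img)
    return torch.softmax(logits, dim=1)[0, 1].item()
```

Because only the small final layer is trained, a few labelled examples can be enough, which is the point the Amazon work pushes to its extreme.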

But in many real-world situations, such as an image posted to Instagram, there might be just one label. Two things are therefore pivotal if we want to start building on and adjusting to consumers' taste and expectations. The first is to understand the human brain in depth; the second is to make sure we can deliver, in real time, experiences tailor-made for each consumer and user, based on the understanding we develop of the context.

But when it comes to understanding the human brain, we may still have a long way to go. Professor Krishna Shenoy likens our understanding of the brain to humanity's grasp of the world map in the early 1500s.

Another professor, Jeff Lichtman, is even blunter about how far we have come with brain mapping. He tends to open his courses by asking his students, "If everything you need to know about the brain is a mile, how far have we walked in this mile?" Students give answers like three quarters of a mile, half a mile, a quarter of a mile, and so on; he believes the real answer is "about three inches."

So while researchers are funded to explore and map the human brain, we, as companies and strategists, work to understand in depth each personalised, dynamic context and the expectations we have of each product or service. One key element of this effort is speed, along with the ability to make the personalised service as accurate as possible. To do that, brands need people to participate and interact with them more than ever.

Nike recently (September 2017) opened a studio in New York where people can create a hyper-personalised product and walk away with a truly unique, one-off item.

"The intention of the project is to bring to life the collaborative design experience that we offer our athletes," said Mark Smith, VP of Innovation Special Projects. "They love products that tell their story, so we wanted to combine that idea with a new process of live design and manufacturing that allows our guests to come into the space, work collaboratively with us and leave with a special product in less time than ever before."

Until now, Nike's programme let consumers design their own shoes and receive them in the mail within a couple of weeks. Now the whole value chain has been redesigned, and people can walk out of the store carrying their very own personal pair of sneakers. Combined with what Amazon is trying to achieve, as described above, it becomes obvious that we are fast reaching a point where products are made not just to address needs or wishes but to fulfil expectations, thoughts, and tailor-made requirements.

 

To do that, we need tools that allow us not only to design the service but also to experience proactively how it looks, how it feels, what kind of outcomes it has, and how it is about to change our daily life. And this is where AI and AR come in. Technology today is developing faster than ever.

We have reached the point where we practically take technology for granted. We use it not only to create value and to confront complexity but also to address the new challenges and demands of a rapidly digitalised world. Technology and tools are built to drive improvement in every part of our daily lives and work. By mastering technology and its latest components, we as users can feel and touch the future in a far more flexible way than before. With that in mind, we begin to understand that stepping into the AR world is like stepping into a version of our current reality where there is always room for more than one change, and in more than one layer.

Mixing reality with objects allows us not only to see how things, spaces, and ideas might be when fully implemented, but also to create ultra-hyper-personalised expressions or variants of spaces, products, and services, tangible or not. Using devices like HoloLens goggles, spaces take on a completely different dynamic, our understanding of the context becomes richer than ever, and our interaction with brands or services can become more seamless than before. Combined with the intelligence that comes with the AI engines, the dialogue between us and engines, people and nature, users and services reaches a completely new, unknown yet truly fascinating level. We begin building hyper-personalised worlds out of the data available, the new, more capable algorithms, and the new ways of working with information, and we become able to blend reality with a data-based, experience-driven point of view.

It emerged recently (September 2017) that Google is working on an entirely new set of algorithms for the Google Street View service, capable not only of giving us a better view of places but also of letting us start having logical, refined, humanised conversations with the Google engine. And this is where the challenge lies for Google and the other algorithm makers: to make services and engines not just able to answer deep and complicated questions coming from people, but to start understanding expectations. Jen Fitzpatrick, the Google VP who heads the company's maps division, notes that people already expect engines run by Google to answer thoroughly humanised questions, and beyond that to become a better, more precise conversational engine. To do that, a completely new approach is required. Engines and algorithms capable of answering people's questions become hungry for data as well as for pictures. To give engines a better visual understanding of our world, they need access to pictures.

A year and a half ago, Google Photos came out as a service. One of its key attractions was that people were invited to upload all their pictures and store them there for free, not only to protect them from loss but also to have them categorised in a completely fresh way. People, myself included, started uploading their pictures to Google Photos. It was delightful to get back albums, collages, and enhanced and edited pictures like never before, and of course to feel that I could search through them logically to find pictures or moments from the past.

At the same time, by doing this we were filling AI picture pools with rich-content data. Modern digital pictures carry location data with them, among other details. Suddenly every area started being photographed from every possible angle, giving Google's AI engines and machine learning algorithms data-rich sets for the maps service. Of course, this is only one side of the story.
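As an illustration of how much location information an ordinary photo can carry, here is a minimal sketch that reads the GPS coordinates embedded in a JPEG's EXIF metadata with the Pillow library. The file name is a placeholder, and many photos simply have no GPS block at all.

```python
# Minimal sketch: read GPS coordinates from a photo's EXIF metadata.
# "holiday.jpg" is a placeholder file name; photos without GPS data return None.
from PIL import Image, ExifTags

def extract_gps(path: str):
    exif = Image.open(path)._getexif() or {}
    # Map numeric EXIF tag ids to readable names.
    named = {ExifTags.TAGS.get(tag, tag): value for tag, value in exif.items()}
    gps_raw = named.get("GPSInfo")
    if not gps_raw:
        return None
    return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

print(extract_gps("holiday.jpg"))  # e.g. {'GPSLatitude': (...), 'GPSLongitude': (...)}
```

Multiply this by billions of uploaded photos and it becomes clear why user pictures are such a rich source of mapping data.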

The other side is that the better understanding and mapping we have of a place, the more interactive and alive that place becomes when it comes to search. Based on this, what we get back as an answer to each question we ask Google is not just a generic or merely question-specific answer, but something deeply personalised, aimed at driving the conversation and leading it towards the best conclusion for us and the context we are in. It is quite interesting to observe that hyper-personalisation is not only about how we enjoy services or products, but about how the world is about to be personalised for each of us.

All of a sudden, we realise that everything around us will no longer be a common view or something that applies to everyone, but the outcome of a far more intricate, far deeper and more advanced interaction between the AI engines and our cyberselves.

Getting back to what Amazon is trying to accomplish by applying algorithms to what is stylish and what is not, and by employing engines capable of designing in their own right, we reach a point where the computerised understanding and consolidated learning about people's desires and expectations, about how the world and the other people within it look, can itself be personalised.

We reach a point where we will not only be able to choose but will unconsciously experience colours, clothes, walls, and environments in a unique way, based on what matches our expectations and requirements.

As we head towards an era, within the next 18–30 months, in which autonomous cars will start rolling onto the streets at massive scale, a picture-driven and experience-driven approach seems the only way to go.

Pictures and data coming from Google Street View's new set of algorithms, combined with all the photos people upload of each place, could help self-driving cars understand the world more precisely. And being a passenger in one of those cars, a smart car able to sense your emotions, will be an entirely different experience from commuting today, whether driving or simply riding along. At present, even the smartest cars are pretty dumb compared with other machines or advanced digital assistants.

Cars today cannot tell the difference between individuals. They are built to do the very same thing, be driven from A to B, regardless of who the driver is and whether we like the way it is delivered or not. Moreover, cars are so dumb that they can navigate you from point A to point B picking streets that make no sense to drivers with local knowledge of the roads. Today's cars are essentially stunning but dumb engineering achievements that offer zero personalisation. They take orders and respond to them in the same way whether a teenager or their grandmother is behind the wheel. The work engineers and automakers are doing in this area promises to change that. Cars will very soon be able to recognise the driver, sense their emotional state, and reset their systems to meet the driver's preferences and expectations.

As cars are and will be, we need to reach a point where they have the intelligence not only to understand who the driver is, their driving style, and their preferences should they decide not to drive, but also to sense their emotional state and adjust proactively, on the fly, to triggers from the context and to requests from the driver, along the lines of the sketch below. In 2015, a major car manufacturer announced that it would invest more than $1 billion in AI over the following five years to improve car safety. The business case: better safety can be used as a major selling point for its cars, especially in very competitive markets like those in North America.
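To make the idea a little more tangible, here is a hypothetical sketch of the kind of logic such a car could run: identify the driver, take an estimated emotional state from an in-cabin sensor, and adjust cabin settings accordingly. Every class, profile, and threshold here is an assumption made up for illustration, not any manufacturer's actual system.

```python
# Hypothetical sketch of in-car personalisation: identify the driver, take an
# estimated stress level, and adjust cabin settings. Entirely illustrative.
from dataclasses import dataclass

@dataclass
class DriverProfile:
    name: str
    preferred_temperature: float   # degrees Celsius
    preferred_playlist: str

PROFILES = {
    "face_id_001": DriverProfile("Alex", 21.0, "morning-commute"),
    "face_id_002": DriverProfile("Maria", 23.5, "classical-calm"),
}

def adjust_cabin(face_id: str, stress_level: float) -> dict:
    """Return cabin settings for a recognised driver and an estimated stress level (0-1)."""
    profile = PROFILES.get(face_id)
    if profile is None:
        return {"mode": "guest", "temperature": 22.0, "playlist": "neutral"}
    settings = {
        "mode": "personal",
        "temperature": profile.preferred_temperature,
        "playlist": profile.preferred_playlist,
    }
    if stress_level > 0.7:  # a high estimated stress reading
        settings["playlist"] = "calming"
        settings["navigation"] = "prefer least congested route"
    return settings

print(adjust_cabin("face_id_001", stress_level=0.85))
```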

 

Whether AI and machine learning can become a differentiating capability for carmakers will depend on just how far they can push the technology, and on whether the advantages and features quickly end up as standard equipment in mass-market cars, built into the list price. In this light, what Tesla is doing with the recently announced and now-in-production Model 3 is this: giving a massive audience access to ahead-of-the-curve technology and innovation that not only reduces the complexity of owning a new electric car but also starts teaching more and more people to have a conversation and exchange information with Tesla cars, and to experience a completely new approach to what "owning or using a car" means.

As we all know, Tesla cars are hyper-complex software and quite simple hardware on wheels. That is because Tesla's cars are squarely centred on giving the driver the best, most hyperconnected experience, and soon the safest, most advanced self-driven ride. The pace at which technologies hyperconnected with AI engines arrive in our cars therefore defines the pace at which we begin to have hyper-personalised experiences of how we move and drive through our world.

We are reaching a point where services and products give way to experiences and tailor-made worlds. And this is where things start to get considerably more intriguing, because no matter how incredible an online experience is, people still expect great physical experiences as well.

The connection between the two is the emotional state people are in each time they accomplish something, act, behave, continue with something, or simply experience the world. Consequently, to have a precise, tailor-made, hyper-personalised world for every person, we need machines that are able to understand emotions.

Machines are being given the capacity to detect and recognise expressions of human feeling, for example interest, distress, and pleasure, in the recognition that such communication is vital for helping machines choose more helpful and less aggravating behaviour when shaping the world we experience. But this alone is not enough.
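As a toy illustration of what "recognising expressions of feeling" can look like in practice, here is a minimal sketch that trains a tiny text classifier to label short messages as interest, distress, or pleasure. The handful of example sentences and labels are made up for illustration; a real system would be trained on far larger datasets and would usually draw on voice, facial, or physiological signals as well.

```python
# Minimal sketch: a toy emotion classifier for short text messages.
# The training examples and labels are illustrative, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Tell me more about how this works",      # interest
    "I can't believe this is broken again",   # distress
    "This is exactly what I was hoping for",  # pleasure
    "What else can it do?",                   # interest
    "I'm so frustrated with this service",    # distress
    "I love how this turned out",             # pleasure
]
labels = ["interest", "distress", "pleasure",
          "interest", "distress", "pleasure"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["How does the new feature work?"])[0])
```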

One of the real barriers within humankind is that we do not all speak the same language; there are more than 6,909 languages listed in the world today. To leave this limitation behind, we need a new way to work around the language barrier. One answer is to apply neural networks to the conversations happening online, as Facebook does. Neural networks help engines translate words and expressions while also taking the context into account, and they can recognise pictures and attach expressions and dynamic contexts to images, and vice versa. Google, meanwhile, follows a different approach to neural networks, aiming to translate in real time. Google's approach was to build neural networks that focus on grouping expressions of similar concepts. The way it is assembled, it works in specific ways so that it can understand sentiment and, possibly, after the right training of the engines, even emotions.
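To illustrate the "grouping expressions of similar concepts" idea, here is a minimal sketch that embeds a few phrases in different languages into a shared vector space and compares them with cosine similarity, using the open-source sentence-transformers library. The model name and the example phrases are assumptions for illustration; this is not Google's or Facebook's actual translation system.

```python
# Minimal sketch: map phrases into a shared vector space and measure how close
# their meanings are. Illustrative only; not Google's or Facebook's system.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

# A multilingual model that places similar meanings near each other.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

phrases = [
    "I am very happy with my new shoes",
    "Estoy muy contento con mis zapatos nuevos",  # Spanish, same idea
    "The weather will be rainy tomorrow",
]

embeddings = model.encode(phrases)
print(cosine_similarity(embeddings))  # similar meanings score close to 1.0
```

Phrases with the same meaning land close together regardless of language, which is what makes real-time translation and sentiment grouping possible on top of such representations.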

With that in place, machines begin to sense feeling and get ready to take part in human-machine conversations more precisely than ever before. And this is the critical moment when we start having machines not merely as process or transaction enablers but as our advisors.