Two patents were recently granted to Google that paint a very interesting picture of the future of search: one covering the detection and correction of potential errors in user behavior, and a second covering guided purchasing.
Before we dive in, I feel it’s necessary to point out that having a patent granted does not mean that Google will be implementing everything (or even anything) contained therein; however, it does illustrate areas they’re considering and willing to invest significant time and money into protecting.
That said, these two patents contain ideas and technologies that strongly reflect the direction Google is currently heading in, and they point to a stronger monetization strategy on mobile and voice-first devices.
In this two-part series, I will take you through some of the key points of each patent. Then, I’ll talk about how the information from these two patents combines to paint a picture of a future where search and task completion are very different than they are today.
In this article, I’m going to focus on the first patent, “Detecting and correcting potential errors in user behavior.” To get a feel for the full scope of what Google is getting at here, I’ve highlighted excerpts from key sections of the patent, followed by my assessment of those sections.
In the abstract, we understand that Google intends to take into account what we are presently doing and place that in the context of tasks we are likely to do in the future. If a current action is likely to interfere with an expected future task, then the user is notified via their device. The abstract doesn’t include what will happen when that event occurs, so we’ll just have to keep reading.
The background of a patent basically outlines its “why,” or what problem it is attempting to solve. In the background here, we see Google acknowledge that, while today’s devices can alert users to upcoming events they may need to attend or recommend products based on past purchase history (I wonder why that’s specifically mentioned, don’t you?), the current technologies aren’t proactive in correcting users when their actions are not compatible with these events or needs.
While there are four parts to the summary, it is in Section 2 that we get a picture of what Google is accomplishing with this patent. Building on the background, which establishes that the system being described will detect when a user’s current actions are likely to prevent future actions they are expected to take, here we see that Google will send the user an indication that their current actions will prevent future ones.
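The core loop the summary describes can be reduced to a simple check: does the predicted end of the current action collide with the start of an expected future task? Here is a minimal sketch of that idea; all of the class names, field names and timestamps below are my own illustrative assumptions, not anything specified in the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A hypothetical representation of what the user is doing now."""
    name: str
    ends_at: float  # assumed: predicted completion time, minutes since midnight

@dataclass
class FutureTask:
    """A hypothetical expected future task (mined from calendar, email, etc.)."""
    name: str
    starts_at: float  # assumed: start time, minutes since midnight

def check_conflict(current: Action, expected: FutureTask) -> Optional[str]:
    """Return a notification string if the current action is predicted
    to run past the start of the expected future task; else None."""
    if current.ends_at > expected.starts_at:
        return (f"Heads up: '{current.name}' may prevent you "
                f"from being able to '{expected.name}'.")
    return None
```

For example, a movie predicted to end at 10:00 p.m. would trigger a notification against a 9:30 p.m. airport departure, while an early dinner would not.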
This might not seem altogether interesting; however, as we dive further in, we’ll start to understand the full reach, influence and power that they are talking about.
You’re going to see some numeric references below, such as “device 110.” These numbers relate to the figures included with the patent. While most aren’t specifically relevant to understanding what is being discussed (in our context), I don’t want you feeling like you’re missing anything, so I’ll include the figures above where they’re first referenced. Let’s begin with Figure 1.
Now let’s get back to what the patent covers …
The sections between 2 and 25, while interesting, reinforce the points noted previously. In Section 25, however, we get a bit of interesting information. Google mentions the use of communication systems built into the device, as well as social network account data and other third-party applications and services on the devices accessed by the user. Basically, the patent is built on the idea that data from virtually any source can be used to determine expected actions a user is likely to take.
In discussing how the system understands the context of actions, Google adds in Section 31 the idea of taking into account the motion of the user’s device and external factors such as weather and purchase histories (there it is again). Once more, we see a wide variety of contextual data sources being used, but here we also see the inclusion of voice conversations and voice mail.
Privacy concerns aside, that’s a lot of data, especially when we consider that “voice conversations” doesn’t necessarily mean “on the phone.” The fact that it’s listed separately from voice mail suggests it could include conversations picked up by the device itself.
In Section 34, we see an expansion of the ideas presented previously into the unscheduled world. Until this point in the patent, we’ve generally read about Google determining future events using data gathered from various sources (social media, text, voice and so on). Here, however, we see this expanded: the system gains an understanding of patterns based on past and present behavior, and establishes from that what the user is expected to do.
In Section 36, we see the system bridge the behavioral patterns established by “watching” us (per Section 34) with the event and calendar information mined from other sources, allowing known upcoming events to override habitual patterns when the two would conflict with our daily activity.
In Section 39, we simply see the output: Google notifying the user that an action they are taking now may impact the ability to perform a future action.
In Section 40, the patent discusses the use of machine learning and modeling to determine and predict possible actions being performed. In other sections of the patent, we read examples such as using the motion and location of a device to determine that the user is standing in line at an airport (and what the context of that action would predict).
In Section 44, we see a very helpful example: the system detecting that the user has overslept when they have a flight and thus alerting them to this fact before it’s too late to catch it, given current conditions (time to airport, time to get through the gates, etc.).
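The oversleeping example boils down to back-scheduling arithmetic: work backward from the departure time through each leg of the trip to find the latest moment an alert is still useful. A minimal sketch follows; the parameter names and the buffer values are my own assumptions, not figures from the patent.

```python
def must_wake_by(departure_min: float, drive_min: float,
                 security_min: float, prep_min: float = 45,
                 buffer_min: float = 30) -> float:
    """Latest wake-up time (minutes since midnight) that still allows
    the user to make the flight, working backward from departure."""
    return departure_min - (drive_min + security_min + prep_min + buffer_min)

# A 6:30 a.m. flight (390 min), a 40-minute drive and 25 minutes for
# security, with assumed 45 min to get ready and a 30-min buffer:
# 390 - (40 + 25 + 45 + 30) = 250, i.e., 4:10 a.m.
```

The system described in the patent would presumably refine each of these terms with live data (current traffic, reported security wait times) rather than fixed constants.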
Here, we see something fairly straightforward but that needs to be illustrated as we approach Section 54. The system is designed to actively provide information that will allow a user to avoid making a mistake, even if they are unaware of said mistake to begin with. A very pleasant idea, to be sure, but here’s where it gets dangerous…
It’s the first line of this section that might be considered dangerous. The patent suggests that the system allows for the “correcting of behavior” without the need for the end user to even be aware that they are being corrected. Let that sink in for a second: you do not need to be aware that a system controlled by Google is adjusting your actions without your knowledge, based on what it determines you are or should be doing.
Before we continue, note that there will be references to elements in Figure 2. An understanding of these figures is not necessary for our purposes here, but it may be helpful to some, or at least reassure you that you’re not missing anything. Here’s Figure 2:
Section 66 is fairly straightforward: the patent states that machine learning and/or AI will be used to determine the most likely tasks being performed, based on past models of behavior, and to weigh them against their impact on performing a future task. This includes establishing the likely tasks and needs associated with that future event, to determine whether the user has completed everything that needs to be completed before engaging in it. This may include things like stopping at the gas station to fill up the night before a flight when the flight is scheduled to take off at 6:30 in the morning.
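That prerequisite check is essentially a filter over a predicted task list. A sketch of the idea, purely hypothetical in its names and data (the patent does not specify any such structure):

```python
def incomplete_prerequisites(event_tasks: dict) -> list:
    """Given tasks the model associates with an upcoming event,
    mapped to whether each is complete, return what still needs doing."""
    return [task for task, done in event_tasks.items() if not done]

# Assumed example: prerequisites inferred for a 6:30 a.m. flight
todo = incomplete_prerequisites({
    "fill gas tank": False,
    "pack bags": True,
    "set alarm": True,
})
# → ["fill gas tank"]
```

In the patented system, the interesting part is not this trivial filter but the inference that populates the dictionary in the first place: deciding, from behavioral models, which tasks belong to the event and whether each has already happened.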
It’s worth noting that in other sections not included here, these machine learning and AI systems are constantly training to understand global patterns, as well as patterns unique to individuals.
Here’s my favorite part, and where it all comes together for Google. In this section, we see a user dropping off dry cleaning, and the system has predicted it is for an event the user has the next day. Knowing the dry cleaner is closed the following day, Google employs the corrective action of suggesting a nearby dry cleaner that will be open the following day. I imagine that suggestion, which would likely be auditory, would sound something like:
Google Assistant: The dry cleaner you are arriving at will be closed tomorrow. Is this for the event tomorrow night?
User: Yes it is.
Google Assistant: Billy’s Dry Cleaning is two blocks away. Would you like me to enter that as your new destination? (Insert a subtle notice on the display the user isn’t actually looking at because they’re driving, indicating Billy’s Dry Cleaning has paid for the ad.)
I don’t know that they would actually hide the ad notification, obviously, but this is one of the areas where Google could easily influence buying decisions based on AdWords investment.
This section is incredibly important in function and utility. In this section, we see the system able to understand the less formal aspects of human life, such as unofficial times to be at events. I presume this would include considering the average time it takes to get through customs on arrival at LAX to have an Uber waiting for me as I’m leaving, but not sitting there for 30 minutes because it was ordered based on the time my plane was scheduled to land.
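The airport pickup example is another piece of back-of-the-envelope scheduling: request the ride against when the user will actually exit, not when the plane is scheduled to land. A hypothetical sketch (the learned customs delay and pickup lead time are assumptions of mine):

```python
def ride_request_time(scheduled_landing_min: float,
                      avg_customs_min: float,
                      pickup_lead_min: float = 10) -> float:
    """When (minutes since midnight) to request the ride so the car
    arrives as the user exits: landing time plus the learned average
    customs delay, minus the driver's typical lead time."""
    return scheduled_landing_min + avg_customs_min - pickup_lead_min

# Assumed example: a 10:00 a.m. landing (600 min) at LAX with a learned
# 45-minute average customs delay and a 10-minute pickup lead time
# would request the car for 10:35 a.m.
```

The value of the patented approach is in learning `avg_customs_min` per airport, per time of day, from observed behavior, rather than using a fixed figure.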
The stage is set
In the patent summary above, we’ve set the stage. In part two (to be published next week), we’ll be looking at a second patent that involves guided purchasing. That is, Google predicting the type of information you’ll need to make a purchase based on general user behavior combined with your own personal patterns and past purchases, and “guiding” you to the “right decision.” It’s an extremely exciting patent for paid search marketers: as implemented by Google, it would give you enormous control over your ads and how and when they’re triggered.
But before going there, we needed to understand this first patent and how it’s designed to influence users’ core behavior. Soon, we’ll see how this all comes together into one glorious monetization strategy for Google, with more ads displayed and more ads selected by users at critical decision-making points.
The post Patent 1 of 2: How Google learns to influence and control users appeared first on Search Engine Land.