Google I/O 2018 is this year’s edition of Google’s annual conference, where developers from across the world come together to see the latest technology Google is building. As usual, the company announced several new products and showcased the progress it is making across its existing ones.

This year’s Google I/O covered a lot of ground, including Artificial Intelligence (AI), Machine Learning and Android, with a particular focus on products in the Android ecosystem such as Maps, Photos, Google News, Google Assistant and Google Lens. Here’s everything you need to know about the big announcements that were made.

Artificial Intelligence (AI)

Google revealed at the event that it has upgraded to TPU v3, its third-generation Tensor Processing Units. These are liquid-cooled and far more powerful than the previous generation, allowing models to be trained and run faster than ever. Google CEO Sundar Pichai highlighted some compelling uses of AI, including tests his team is conducting to help anticipate medical events and give doctors more time to prepare for a patient, well in advance. He also explained how, from a scan of a person’s retina, AI can detect details such as age and gender and flag possible medical conditions. The use of AI won’t be limited to medical science; it is also coming to popular Google products. For example, Google is making it easier to draft emails: with the help of AI, users get suggestions based on context and can simply hit Tab to fill in the email with automatically placed words. This new feature is called Smart Compose, and it will be available in Gmail soon.

Google Photos Can Now Edit Images on Your Behalf

The use of AI will now be more evident in Google Photos as well. A new option called One-tap actions lets users enhance their photos or share them with friends automatically. For example, if a photo was captured in low light, Google Photos will offer to brighten it with a single tap. That’s not all: the feature can make the subject pop in colour while keeping everything else in grayscale, or do the opposite. It is also great for bringing back memories, as Photos can now turn black-and-white photos into colour images. With One-tap actions, users can automatically send photos to the friends who appear in them, and the whole process is faster than it takes to read these few lines.

Though AI was one of the most emphasised topics at this year’s Google I/O, its use isn’t limited to these few things. It powers everything from finding a dog’s picture in Google Photos to taking a perfect portrait selfie, where the camera keeps the subject sharp and in focus while blurring the background. Gmail uses AI to remind you to follow up on or respond to messages that are more than two or three days old, making sure you haven’t missed any important emails. Google Translate can translate the text on a sign or a menu when you hold the camera in front of it. Finding the right address is no longer a big deal either, as Google Maps has learned to read street names and addresses from billions of Street View images.

Google Assistant Gets Smarter & Better

Google Assistant is Google’s smart assistant that lets users do many things and get information using their voice. Google says that people ask about the weather in as many as 10,000 different ways, so with improvements to its language understanding, you can speak naturally to your Google Assistant and it will know what you mean. Google has also put a lot of effort into making the Assistant’s voice sound natural. Previously that required thousands of hours of voice recordings, but with AI and DeepMind’s WaveNet technology the process now takes much less time while producing the same natural-sounding voice. Starting today, you can choose from six new voices for your Google Assistant, and John Legend, the American singer, songwriter and actor, will lend his voice to the Assistant later this year!

Natural back-and-forth conversations are now possible, and users no longer need to repeat ‘Hey Google’ for each follow-up request. This feature is called Continued Conversation and will roll out in the next few weeks. The Assistant can also now understand multiple requests made in a single sentence. Routines get an overhaul too: users will be able to create Custom Routines using any of the Google Assistant’s one million Actions. The Assistant has also had a visual makeover and can now provide a quick snapshot of your day, with suggestions based on the time of day, your location and your recent interactions with it. Google Keep will be integrated to provide a summary of tasks and list items. Food pick-up and delivery services also arrive on the Assistant: users can order directly from Starbucks, DoorDash and Applebee’s, in addition to existing partners like Dunkin’ Donuts and Domino’s.

Google Assistant will also be integrated with navigation in Google Maps later this summer, letting drivers send text messages, play music and podcasts, and get information without distraction and without ever leaving the navigation screen.

Android P Sneak Peek

Google shared a glimpse of what the next version of Android will look like and how it will enhance the user experience. First, Android P brings a feature called Adaptive Battery, which prioritizes battery power for the apps and services you use most. App Actions is a new feature that helps you get to your next task more easily. For example, if you enjoy music from Google Play, plugging in your headphones will prompt the phone to open Play Music and resume your track. Android P also simplifies navigation with gestures: a single home button sits at the bottom of the display, and navigation works smoothly with swipes.

Digital Wellbeing

Android P also introduces a set of Digital Wellbeing tools: Dashboard, App Timer, Do Not Disturb and Wind Down. The new Dashboard provides insights into how you use your device, such as time spent in apps, the number of times you unlock the phone and the number of notifications you receive. To help you step away from social media, App Timer lets you set time limits on apps; it nudges you when you are close to the limit and then grays out the app icon to remind you of your goal. The improved Do Not Disturb mode silences phone calls and notifications, including the visual notifications that appear on the screen, while Wind Down fades the screen to grayscale at your chosen bedtime.

The Android P Beta is now available on Google Pixel devices, and thanks to Project Treble it is also available on a range of non-Google devices: the Sony Xperia XZ2, Xiaomi Mi Mix 2S, Nokia 7 Plus, Oppo R15 Pro, Vivo X21, OnePlus 6, and Essential PH‑1.

Google Lens

At Google I/O, Google announced that Lens will be available directly in the camera app on supported devices from LGE, Motorola, Xiaomi, Sony Mobile, HMD/Nokia, Transsion, TCL, OnePlus, BQ, Asus, and of course the Google Pixel. Google Lens also gets smarter text selection, connecting the words you see with answers and actions. Copying and pasting text from the real world is now possible, so users can grab anything from a recipe to a gift card code or even a Wi-Fi password.

Google Lens can also recognize words and surface relevant suggestions, and it now works in real time. This is made possible by state-of-the-art machine learning that uses both on-device intelligence and Cloud TPUs to identify billions of words, phrases, places and things in a split second. The new Google Lens will roll out in the next couple of weeks.

Google News Gets Redesigned

Google News is one of Google’s oldest services, and a lot has changed in the last 15 years. The new Google News is powered by AI and ML so that users see the news that matters to them. Full Coverage helps users find all the reports from different sources related to a particular headline. The Newsstand tab makes it easy to find and follow your favorite news sources, and users can subscribe directly using the Subscribe with Google platform (launched as part of the Google News Initiative), which gives them access to premium content across all their devices.

Google Maps Gets Better

The redesigned Explore tab in Google Maps suggests interesting places nearby. Top trending lists such as the Foodie List help users find new restaurants based on information from local experts, Google’s algorithms, and trusted publishers like The Infatuation. For each place, a ‘match’ score is displayed: a number that indicates how likely you are to enjoy it, along with the reasons why. Machine learning generates this number from factors such as the details of the business, the food and drink preferences you’ve set in Google Maps, the places you’ve been to, and whether you’ve rated a restaurant or added it to a list. Planning gets easier, too: long-pressing on places creates a shareable shortlist that your friends and family can add more places to and vote on.

