Bing Chat: Microsoft rolls out a very promising feature to challenge Google

Bing Chat now supports multimodal search. Users can submit images to the chatbot, which can analyze them and answer questions about them. Microsoft does not hesitate to step onto Google's turf.



Bing Chat keeps gaining new image-focused capabilities. The latest one is called "Visual Search". As the name implies, it lets users include photos in their prompts; the chatbot then analyzes them to answer their questions. After a successful testing phase, this image recognition feature is now available to all users.

“Based on OpenAI’s GPT-4 model, Visual Search allows you to upload images and search the web for related content. Take a photo, or use one you found elsewhere, and ask Bing to tell you about it. The chatbot can capture the context of an image, interpret it and answer questions about it,” Microsoft says on its blog.


Bing Chat can now recognize images

Microsoft is going after Google, which already deployed a similar tool in 2021 called MUM. Thanks to it, Google's search engine can rely on an image to provide a fuller answer. With Visual Search, Bing users can upload an image and have the AI interact with it, which opens up a wide range of possibilities.

In a demonstration video, Microsoft shows a hand-drawn sketch of a user interface. It then asks Bing to generate HTML and CSS code based on the drawing. In a few seconds, the chatbot produces dozens of lines of code and manages to build a functional HTML page whose design closely resembles the reference drawing. Pretty impressive, isn't it?
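Microsoft has not published the exact code generated in the demo, but the output is plausibly something like the following hypothetical snippet for a simple sign-up form sketched on paper. The form fields and styling here are purely illustrative, not taken from the video:

```html
<!-- Hypothetical example: the kind of page Bing Chat might produce
     from a hand-drawn sketch of a simple sign-up form.
     Field names and styling are illustrative only. -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Sign up</title>
  <style>
    body { font-family: sans-serif; background: #f4f4f4; }
    /* Card mimicking the rectangle sketched in the drawing */
    .card {
      max-width: 320px;
      margin: 60px auto;
      padding: 24px;
      background: #fff;
      border-radius: 8px;
      box-shadow: 0 2px 6px rgba(0, 0, 0, 0.15);
    }
    .card input, .card button {
      width: 100%;
      margin-top: 12px;
      padding: 10px;
      box-sizing: border-box;
    }
    .card button { background: #0067b8; color: #fff; border: none; }
  </style>
</head>
<body>
  <div class="card">
    <h2>Create an account</h2>
    <input type="text" placeholder="Name">
    <input type="email" placeholder="Email">
    <button>Sign up</button>
  </div>
</body>
</html>
```

Saved as an .html file and opened in a browser, a snippet like this renders a working form, which is the sort of result the demo shows Bing producing from nothing more than a photo of a sketch.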


There are many other ways to use Visual Search in Bing Chat. For instance, you can photograph a building and ask about its history, request a suitable recipe by taking a photo of the ingredients you have on hand, or get tips for repairing a damaged device.
