Facebook is big, and that is hard to argue with: roughly half of the world's internet users are on its platform. Every minute of the day, about 4.1 million likes are recorded on posts by individuals and brands. That is a deluge of posts and likes, much of which you may never need, and how easily can you recall a post that aligns with your interests? In the past, Facebook developed algorithms to help you better manage posts, such as tweaking its News Feed algorithm to rank posts from friends above posts from pages. This week, though, Facebook announced something it calls DeepText.
DeepText is an artificial intelligence (AI) system able to understand the sentiment and meaning behind users' posts. This means that in the future, you will most likely see posts that align closely with your specific interests, based on what you post and like on Facebook. The announcement came in a blog post on Wednesday, where Facebook describes the new system as “a deep learning-based text understanding engine that can understand with near-human accuracy the textual content of several thousand posts per second, spanning more than 20 languages.”
This has the potential to change the way we search for items on Facebook and to eliminate spam. Facebook holds trillions of pieces of data across posts, photos and comments, and that is a huge resource for DeepText, which can sweep through this massive database and, by learning your interests, surface the information you actually need.
“In traditional NLP approaches, words are converted into a format that a computer algorithm can learn. The word “brother” might be assigned an integer ID such as 4598, while the word “bro” becomes another integer, like 986665. This representation requires each word to be seen with exact spellings in the training data to be understood.
With deep learning, we can instead use “word embeddings,” a mathematical concept that preserves the semantic relationship among words. So, when calculated properly, we can see that the word embeddings of “brother” and “bro” are close in space. This type of representation allows us to capture the deeper semantic meaning of words.
Using word embeddings, we can also understand the same semantics across multiple languages, despite differences in the surface form. As an example, for English and Spanish, “happy birthday” and “feliz cumpleaños” should be very close to each other in the common embedding space. By mapping words and phrases into a common embedding space, DeepText is capable of building models that are language-agnostic” says Facebook in the post.
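The idea in the quoted passage can be made concrete with a toy sketch. The vectors below are hand-picked for illustration only; real systems like DeepText learn embeddings from large text corpora, but the geometry works the same way: related words end up close together, which cosine similarity makes measurable.

```python
import math

# Toy 3-dimensional word embeddings, invented for illustration only.
# Real embeddings are learned from data and have hundreds of dimensions.
embeddings = {
    "brother": [0.90, 0.80, 0.10],
    "bro":     [0.85, 0.75, 0.15],
    "table":   [0.10, 0.20, 0.95],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "brother" and "bro" point in nearly the same direction,
# while an unrelated word like "table" does not.
print(cosine_similarity(embeddings["brother"], embeddings["bro"]))
print(cosine_similarity(embeddings["brother"], embeddings["table"]))
```

An integer-ID scheme (e.g. “brother” = 4598, “bro” = 986665) offers no such notion of distance, which is exactly the limitation of the traditional approach the post describes.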
Beyond its search potential, the system is capable of pointing you toward the specific tools Facebook provides for carrying out a task. If you post, for example, that you are going to see a hit movie this weekend, the system could pick that up quickly and eventually point you toward apps and other useful posts on the subject, factoring in your location. The location element matters because it further streamlines discussion around your area.
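To make the idea of routing a post to a relevant tool concrete, here is a deliberately crude, keyword-based stand-in for the kind of intent detection DeepText performs with learned models. The category names and keyword lists are invented for illustration and are not Facebook's actual API.

```python
# Hypothetical intent categories and trigger keywords -- a toy baseline,
# not DeepText, which uses deep learning rather than keyword matching.
INTENT_KEYWORDS = {
    "movies": ["movie", "cinema", "film", "trailer"],
    "travel": ["flight", "hotel", "trip", "vacation"],
    "food":   ["dinner", "restaurant", "recipe", "pizza"],
}

def detect_intent(post_text):
    """Return the first intent whose keywords appear in the post, else None."""
    words = post_text.lower().split()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in words for kw in keywords):
            return intent
    return None

print(detect_intent("Going to see a hit movie this weekend"))  # movies
```

Once a post is tagged with an intent like this, a platform can surface matching tools or nearby events; the deep-learning version simply replaces the keyword lookup with a classifier that understands paraphrases and slang.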
When Facebook launched its Graph Search in 2013 in partnership with Microsoft's Bing, a partnership it dropped in 2014, it was all based on an algorithm capable of tapping Facebook's big data to provide efficient search results confined to Facebook itself. Some referred to it at the time as a potential Google Search competitor, but Facebook has continued to position it as search for its own platform, not the entire web. That said, combined with DeepText, Facebook has the potential to surface trillions of pieces of data, from photos and videos to articles, for users of this service in the future.