Facebook announces Ego4D AI project to collect first-person view data

Over the years, Facebook has made a number of investments in its artificial intelligence (AI) technology. The company has now announced the launch of a new long-term project called Ego4D, which aims to tackle the challenges AI faces with egocentric, or first-person, perception.

One of the problems Facebook identified in training its AI is that the available data sets consist largely of photos and videos shot from a third-person perspective. This can bias the results, which is problematic for AI features meant to operate from a first-person view.

This is the problem that Facebook’s Ego4D project aims to tackle. The company has partnered with 13 universities and labs across nine countries to collect more than 2,200 hours of first-person video in the wild, featuring over 700 participants going about their daily lives.

The project will focus on improving AI capabilities across five benchmarks:

  • Episodic memory: What happened when? (e.g., “Where did I leave my keys?”)
  • Forecasting: What am I likely to do next? (e.g., “Wait, you’ve already added salt to this recipe”)
  • Hand and object manipulation: What am I doing? (e.g., “Teach me how to play the drums”)
  • Audio-visual diarization: Who said what when? (e.g., “What was the main topic during class?”)
  • Social interaction: Who is interacting with whom? (e.g., “Help me better hear the person talking to me at this noisy restaurant”)

Since the project is long term, it is unclear when regular users will actually benefit from it. Facebook will likely introduce AI-powered features over time that are made possible by this research.