IBM and MIT Media Lab Test AI Recommendation Algorithm
Companies that use AI to hold consumers’ attention are now asking themselves whether they can do so while also adhering to an ethical framework. IBM Research and MIT Media Lab have developed a recommendation technique that their research scientists say does just that.
VentureBeat reports that the research team, led by IBM Research AI global leader Francesca Rossi, has created a technique that, “while optimizing its results for the user’s preferences, also makes sure it stays conformant to other constraints, such as ethical and behavioral guidelines.” The team demonstrated the technique in “a movie recommendation system that allows parents to set moral constraints for their children.”
Rather than hard-coding ethical rules, IBM research scientist Nicholas Mattei and the rest of the team defined the rules by example. “We thought that the idea of learning by example what’s appropriate and then transferring that understanding while still being reactive to the online rewards is a really interesting technical problem,” he said. The team decided to test out the algorithm with movie recommendations “because quite a bit of movie-related data already exists and it’s a domain in which the difference between user preferences and ethical norms are clearly visible.”
The recommendation algorithm consists of two training stages. In the first, which happens offline, “an arbiter gives the system [appropriate and inappropriate] examples that define the constraints the recommendation engine should abide by.” The algorithm studies the examples and “the data associated with them to create its own ethical rules.” The more examples and data, “the better it becomes at creating the rules.”
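The article does not detail how the rules are derived, but the offline stage can be illustrated with a minimal sketch: an arbiter labels examples, and the system infers constraints from them. The data, tags, and labels below are hypothetical stand-ins, not IBM’s actual implementation.

```python
# Minimal sketch of the offline stage: derive ethical constraints from
# arbiter-labeled examples. All tags and labels here are hypothetical.

def learn_constraints(examples):
    """Return a set of disallowed tags learned from labeled examples.

    Each example is (tags, label), where label is "appropriate" or
    "inappropriate". A tag that appears only in inappropriate examples,
    never in appropriate ones, becomes a learned constraint.
    """
    ok_tags, bad_tags = set(), set()
    for tags, label in examples:
        (ok_tags if label == "appropriate" else bad_tags).update(tags)
    return bad_tags - ok_tags

examples = [
    ({"animated", "family"}, "appropriate"),
    ({"comedy", "family"}, "appropriate"),
    ({"graphic-violence", "thriller"}, "inappropriate"),
    ({"thriller", "comedy"}, "appropriate"),
]
print(learn_constraints(examples))  # {'graphic-violence'}
```

As the article notes, more labeled examples sharpen the rules: here, “thriller” is not banned because the arbiter approved other thrillers, so only the tag unique to inappropriate examples survives as a constraint.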
The second stage of the training “takes place online in direct interaction with the end user,” whereby the algorithm “tries to maximize its reward by optimizing its results for the preferences of the user and showing content the user will be more inclined to interact with.” The system deals with the potential of conflicting goals between ethics and preferences by setting a “threshold that defines how much priority each of them gets.” In the movie recommendation demonstration, parents were able to use a slider to choose the balance.
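The online stage can be sketched as a weighted blend of the two goals. This is a simplification of the system described above, which learns online from user interaction; the scores, titles, and slider parameter below are hypothetical, but they illustrate how a threshold lets parents trade preference against ethics.

```python
# Minimal sketch of the ethics/preference threshold: rank candidates by
# a blend of predicted user preference and an ethics score, weighted by
# a parent-set slider. All scores and titles are hypothetical.

def recommend(candidates, slider):
    """Rank candidates by slider*preference + (1 - slider)*ethics.

    `slider` is in [0, 1]: 1.0 optimizes purely for user preference,
    0.0 purely for the ethical constraints.
    """
    def blended(movie):
        return slider * movie["preference"] + (1 - slider) * movie["ethics"]
    return sorted(candidates, key=blended, reverse=True)

candidates = [
    {"title": "Action Flick", "preference": 0.9, "ethics": 0.2},
    {"title": "Family Film",  "preference": 0.6, "ethics": 0.9},
]
# With the slider tilted toward ethics, the family film wins.
print(recommend(candidates, slider=0.3)[0]["title"])  # Family Film
```

Moving the slider to 1.0 flips the ranking toward the user’s raw preference, which mirrors the parental control described in the demonstration.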
See the full story here: http://www.etcentric.org/ibm-and-mit-media-lab-test-ai-recommendation-algorithm/