Meta Abandons ‘Responsible Innovation’ Team as Metaverse Development Continues to Ramp Up

As Meta continues to build its next-level, more immersive social experience in the metaverse, now seems like the time that it really should be focusing on responsibility, and ensuring that it’s developing these new spaces with safety and security for all users in mind. Right?

I mean, if you’re asking people to spend more of their time in fully immersive, enclosed, reality-warping headsets, that seems like it’s going to have a more significant mental health impact than regular social media apps.

It’s concerning, then, that today, The Wall Street Journal has reported that Meta has abandoned its ‘Responsible Innovation’ team, which had been tasked with monitoring and addressing concerns about the potential downsides and negative impacts of its various products.

As per WSJ:

“The [Responsible Innovation] team had included roughly two dozen engineers, ethicists and others who collaborated with internal product teams and outside privacy specialists, academics and users to identify and address potential concerns about new products and alterations to Facebook and Instagram.”

Which Meta has seemingly never done too well anyway, even with this team in place. It’s hard to imagine it improving on this front without these additional checks.

Meta has confirmed the decision to disband the group, while also noting that it remains committed to the team’s goals. Meta also says that most of its Responsible Innovation team members will continue similar work within the company, though it believes that these efforts will be better focused within more ‘issue-specific’ teams.

Of course, it’s impossible to know what this truly means in Meta’s broader development process, and what sort of impact this might have on its future projects. But again, right now, Meta is on the cusp of rolling out its most immersive, most interactive, most impactful experience yet.

It seems like now, more than ever, it needs that additional guidance.

This is a key concern in its metaverse development – that Meta, with its ‘move fast and break things’ ethos, is going to do exactly that, and push ahead with immersive VR development without full consideration of the mental health and other impacts.

Meta already has history in this respect. It never fully considered, for example, the impact that Facebook data could have if it were to fall into the wrong hands, which is why it worked with academics, like those behind Cambridge Analytica, to provide insights on users for research purposes for years before it became a problem. It never seemed to consider how algorithms aligned to the wrong metrics could change people’s perceptions, how mass influence operations by well-funded political groups could undermine democratic processes, or what impact Instagram filters might have on self-perception and the mental health of teenagers.

Of course, Meta has learned these lessons now, and it has implemented fixes and procedures to address each. But in each case, action has been taken in retrospect. Meta didn’t foresee these as problems; it just saw new opportunities, with the eternal optimism of Mark Zuckerberg propelling it to new realms, and new paradigms in connection, as fast as it could reach them.

Meta doesn’t use ‘move fast and break things’ as a mission statement anymore – it switched to ‘move fast with stable infrastructure’ in 2014, before eventually morphing several more times, with Meta settling on ‘bring the world closer together’ in 2018.

That sounds more thoughtful – but is Meta, as a company, actually approaching things in a more thoughtful way, or are we going to see the same negative impacts of VR social as we have with every other platform that Meta has rolled out?

Again, Meta has learned lessons, and it has come a long way. But early issues in the metaverse, and the abandoning of its Responsible Innovation team, do raise concerns that Meta will, as always, be more driven by scale than safety, and more inspired by what could be, as opposed to considering who might get hurt in the process.

As we’ll no doubt be shown at next month’s Connect conference, Meta’s metaverse is full of potential, offering entirely new ways to engage in completely interactive and customizable environments, where virtually anything is possible.

Good and bad.

While Meta is very keen to highlight the good, it can’t afford to overlook the bad, as it has, repeatedly, in the past.

The impacts, in this case, could be far worse, and it’s important that questions continue to be raised about Meta’s development processes in this respect.  

UPDATE: Meta has informed SMT that it’s actually looking to expand its work on responsible innovation, not reduce it, with this change.

“This work is more of a priority than ever, not less. We are scaling it by deploying dedicated teams of experts into priority product areas and have more people working on responsible innovation within product teams than two years ago. That’s why the overwhelming majority of former members of this team are continuing with this kind of work elsewhere at Meta.”

Hopefully a more positive signal of Meta’s focus.