An important public discourse is occurring about what the “right” regulation is for internet platforms (also called interactive computer services). These discussions address First Amendment implications, news moderation, and the monopolization of internet platform ecosystems.* These debates are important and help address why internet platforms should or should not be regulated. From the perspective of protecting free speech while moderating “hate speech,” for example, it does not seem feasible in the near term to establish a “threshold” of human decency (e.g., through transcendental and transformative discussions) on what is right/wrong/true/false speech and information.

Instead, in this article, let’s view the outcomes of user experiences on internet platforms from the perspective of public health and safety. From this perspective, let’s attempt to define a universally undesirable consequence, or negative externality (outcome), caused by the exchange of information on an unregulated internet platform. Against these outcomes, systematic performance goals could be evaluated to determine what is “acceptable” performance, thereby providing a basis for regulatory oversight that mitigates public health and safety consequences.

Reducing the Scope

Specific risks require precise regulatory oversight. The conversation on risk needs to be highly specific so that overgeneralized statements about regulation do not obstruct a potential path forward. For the sake of this article, let’s identify one category of risk to frame the discussion.


Toward Risk-Informed Performance-Based Regulation