Mapping the space of social media regulation

Social media platforms mediate a significant fraction of human communication and attention. The impact of social media on society has come under increased scrutiny, and concerns over its effects have motivated varied and sometimes contradictory government regulation around the world. In this review article, we offer two ways of mapping the space of social media regulation: viewing social media either (i) as an architecture impacted by design choices, or (ii) as a market governed by incentives. We survey the most prominent regulatory approaches globally (both enacted and proposed), with an emphasis on the United States and the European Union, and position these options within the two maps. We conclude by discussing the fundamental trade-offs associated with different interventions, comparing jurisdictions, and highlighting paths forward in the context of the potentially conflicting rights and interests of relevant stakeholders.

friends socially in-person" [2]. While it can be difficult to rigorously measure the effects of social media on society [3,4], concerns over its uses and impacts (including impacts on mental health [5], social cohesion [6], and violence [7]) have motivated many attempts to regulate social media platforms. Recently, studies have also pointed to the positive impacts of online social spaces [8], which at their best can facilitate community creation and other communicative interests [9,10]. Social media companies themselves make efforts to mitigate harms and promote positive experiences online [11]. This review aims to provide high-level structure with which to navigate the space of regulatory approaches and, in so doing, assist policymakers with prioritization. It aims to be descriptive, not prescriptive; we provide high-level guidance as to what kinds of intervention are likely to be effective at achieving different outcomes, but do not advocate for any particular intervention.

Standpoint: Acknowledging standard economic arguments for the efficiency of free markets, we note that current platforms (a) form an oligopoly, (b) benefit from network effects and high switching costs, and in many cases (c) must balance the interests of ordinary users with those of advertisers and other stakeholders [12,13]. Further, unlike industries that face tangible catastrophic risks and have a direct market incentive to design for safety (e.g., aircraft manufacturing), social technologies blend with our environments in intricate ways that, in some cases, corporate platforms have little incentive to causally measure and account for. For these reasons, we think it is important to explore how evidence-based regulation might further promote the public interest online. At the same time, we acknowledge that balancing business operations and economic activity against public interests and individual rights is fraught, and no solution perfectly reconciles fundamental trade-offs. The maps we propose below can also be used to identify approaches that have received less attention. This analysis illustrates that while platforms can be regulated within either model, different kinds of intervention will be more effective at achieving different outcomes [19].
Scope: This review focuses on regulation of for-profit, large-scale platforms that facilitate the dissemination of user-generated content, usually described as "social media." Examples include Meta's core products (Facebook, WhatsApp, and Instagram), Snapchat, TikTok, YouTube, Reddit, X (formerly Twitter), and LinkedIn. We focus on regulation by governments (including supranational entities like the EU, regulatory agencies, and courts), but also include some platform design standards from civil society and social media companies themselves. In the body of the review, we focus on regulatory options that are "live," in the sense that they have been enacted or proposed and not otherwise disqualified on legal grounds (e.g., for being unconstitutional). The Appendix additionally lists some noteworthy non-U.S./EU regulations, as well as disqualified proposals, most commonly U.S. regulations that were deemed to violate the First Amendment.

Maps of social media regulation
In this section, we describe two maps of the space of social media regulation, which respectively describe social media (i) as an architecture, or (ii) as a market. The two maps are closely related (the market operates using the architecture as a substrate), but suggest different ways of understanding what regulation of social media is intended to do (improving design choices, or correcting for market failures). While there are other frameworks that focus on differences between individual platforms (see [20–22]), the architecture and market maps proposed here focus on the social media industry as a whole. Together, they provide a theory of change for how different regulatory interventions can be expected to improve societal outcomes, as well as high-level guidance on when different types of intervention are most appropriate.
Social media as an architecture: Social media platforms can be viewed as a set of interacting components, usually including systems for creating accounts, producing content (both organic content and advertisements), recommending content, viewing content, targeting ads, private messaging, and storing data (see Figure 1). The design of these technologies determines what actions are available to users and other stakeholders, and the relative effort required to perform those actions. Each of these factors influences people's behavior, and so design choices are a form of "intrinsic governance" [10,23]. Both platforms and academics conduct experiments to measure the impact of different design choices [24], and in many cases this body of knowledge makes it possible to recommend or prescribe design choices for specific technologies that would mitigate bad outcomes [25]. As such, policies and laws could deliberately engage with specific features and their associated downstream effects (e.g., by adding friction [26]). A growing field, "regulation by design," seeks to incorporate functional constraints, especially relating to privacy and data protection [27,28], and some have argued that regulation of platform design, rather than speech, can sidestep First Amendment concerns [26,29].

Figure 1: Social media as a set of affordances. Note that this is a general (but necessarily incomplete) framework; for specific platforms, some components are more or less relevant.

Affordances:
In order to support regulators' efforts to set priorities and scope interventions based on the impacts they hope to have, we introduce a simple mapping that connects the social media features that can be regulated with areas of impact (see Table I). We briefly expand on some of these connections below. To be clear, this list is not comprehensive, and there are important differences in how products implement features. We highlight features generally included in social media platforms, as well as related harms that can be mitigated and benefits that can be promoted through design. These identified harms/benefits similarly represent a simplification: because product features are interrelated, changes to most features engender downstream effects. Nonetheless, our intention is to identify especially high-leverage features central to addressing a specified harm or benefit.
• Account creation: Several misuses of social media rely on the creation of accounts by people who should not have them, including foreign state actors trying to influence the views of domestic audiences [49], scammers creating networks of centrally controlled fraudulent accounts [50,51], and underage users creating accounts before they reach the age required by a platform's terms of service [52]. To address these challenges, effective regulations must induce interventions that make such accounts harder to create or manage, whether via rate limits for new accounts, verification systems (such as requiring a valid credit card), or real-name policies.
• Content creation: While the publication infrastructure of social media platforms has democratized access to large audiences [53], such widespread access has also bypassed gatekeepers and in some cases led to the creation of harmful content at scale [54]. Some types of content are illegal in many jurisdictions, most commonly child sexual abuse material (CSAM), coordination of sex trafficking or terrorism, and unauthorized use of protected intellectual property (IP). Some jurisdictions also ban other kinds of speech, such as Germany's ban on Holocaust denial [55].

TABLE I: Architecture Map - This table provides a non-exhaustive mapping of regulatory options grouped by affordance and associated harm (we focus on harms here and discuss benefits in the description above). Interventions with current legal force are shown in black; all other regulatory options are shown in gray. We emphasize that this analysis is descriptive and the inclusion of any given intervention in this table is not an endorsement or recommendation.

Affordance | Area of impact | Regulatory approaches
Account creation | Authenticity & Trust | Ban online impersonation (CA PC §528.5, 2011; TX PC §33.07, 2011) • Require account validation (analogous to Know Your Customer rules [30]; can use third-party identity verification services) • Delays or rate limits for new accounts [31] • Require proof of personhood [32,33]
Account creation | Child Safety | Age restrictions or verification (DSA Art. …)
Recommender systems | Social Trust | Preventive and reactive measures to mitigate risks of illegal or manipulative use of the services (DSA Art. 34, 2022) • Have recommender systems promote highly-ranked content [40] • Nudge users before sharing [41,42] • Nudges for users to pay attention to accuracy [41]
Ad targeting | Social Disruption | Transparency on ads' nature, intent, audience, and reach (DSA Art. 26, 2022) • FCC disclosure rules for social media influencers engaging in paid political speech [45] • Limits on geographic size of targeting (GDPR, 2018) • Limits on 3rd-party data usage [48]
Data storage | Privacy | Time limits on storage [46,47] • Limits on transfer/sale of data (GDPR, 2018; APRA, proposed) • Analogous to financial data protection rules (e.g., U.S. Fair Credit Reporting Act, 1970; HIPAA, 1996)

To avoid hosting such illegal content, platforms maintain systems capable of rapidly classifying individual posts. Doing so requires classifiers, whether automated (e.g., matching against CSAM hash databases [56]), manual human review, or some combination thereof.
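To make these mechanics concrete, the following is a minimal sketch of exact hash-based matching in Python; the function names and the example hash set are hypothetical, and production systems typically match perceptual hashes (which survive re-encoding and cropping) against shared industry databases rather than exact digests.

```python
import hashlib

def file_digest(data: bytes) -> str:
    # Exact cryptographic digest of the uploaded file's bytes.
    return hashlib.sha256(data).hexdigest()

def is_known_prohibited(data: bytes, known_hashes: set[str]) -> bool:
    # Flag an upload whose digest appears in a database of known material.
    # Exact hashing only catches byte-identical copies; perceptual hashing
    # is needed to catch re-encoded or lightly edited variants.
    return file_digest(data) in known_hashes

# Illustrative usage: withhold publication and escalate on a match.
known = {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"}
if is_known_prohibited(b"", known):
    print("match: withhold publication and route to human review")
```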
• Content viewing: Every platform presents content differently, but some associated features, such as defaults and settings for autoplay or infinite scroll, may have direct effects on user welfare (e.g., by facilitating addiction or sleep loss). Such design choices might be regulated without impinging on speech rights.
• Private messaging: Online platforms often allow users to create private channels in which members can message and interact. Such groups have been found both to promote supportive communities [57] and to foster extremist ones [58].
• Recommender systems: Recommender systems allow users to manage information overload, matching them with content deemed most relevant. Such systems have, for instance, prompted scientists to rely on social media for professional networking and dissemination of research [59]. These same tools can also disproportionately recommend (and hence incentivize the creation of) certain kinds of content, such as content that is inflammatory or divisive [60–62].
• Ad targeting: Certain advertising tactics violate anti-discrimination rules, such as racially-based targeting for housing [44] or age discrimination in employment opportunities [63]. Other challenges, such as the use of advertising to perpetrate fraud against older adults, would similarly require regulation that engages with advertising technology [64].

Social media as a market: Social media platforms can alternatively be viewed as market makers or brokers (Figure 2), a perspective that lends itself to viewing negative effects of social media as market failures (including externalities), and regulations as market interventions that alter incentives to correct for those failures [68,69]. In this reading, social media may be particularly prone to unrecognized externalities because end users generally do not feel any cost friction (i.e., users typically access products for free) [70].
Information goods, defined as "anything that can be digitized" [71], began to be considered by economists alongside more tangible resources like land, labor, and capital in the 1980s [72,73]. Information, including social media "content," is distinct from ordinary goods in that it is an experience good (one needs to consume the information before one can assess its value), has near-zero marginal costs of production (e.g., once a YouTube video exists, each additional view incurs only negligible costs for YouTube), and is in many cases a public good (non-rivalrous and non-excludable) [71]. On social media, producers of information, including professional content creators, lay users, and advertisers, exchange information for money and attention, as well as the social capital that attention often represents [74,75].
Attention, a contested word [76], can be defined for our purposes as "the selective allocation of a scarce, rivalrous mental resource to some information-processing tasks to the exclusion of others" [77]. While aspects of attention have long been considered by economists [78], over the last half-century attention itself has been recognized as a valuable and scarce resource [79], with some claiming that attention has become "one of the most pivotal resources constraining both production and consumption in the modern economy" [77]. On social media, users "produce" attention, which is then "consumed" by content creators and advertisers, who are all in competition with each other, as well as with other possible objects of attention, including sleep [80]. Advertisers purchase proxies for attention, such as impressions or clicks [81,82]. Video creators on platforms like YouTube are compensated based on the quantity of time they can induce viewers to spend on their videos, a phenomenon that has led to longer videos suitable for multiple ad breaks [83]. These competitive dynamics can be modeled using standard economic concepts including market equilibria [84], elasticities [85], and consumer surplus [86].
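To illustrate how such modeling proceeds, consider a deliberately stylized linear attention market (our own textbook-style illustration, not a calibrated model), where $p$ is the price advertisers pay per unit of attention:

$$D(p) = \alpha - \beta p, \qquad S(p) = \gamma + \delta p, \qquad D(p^{*}) = S(p^{*}) \;\Rightarrow\; p^{*} = \frac{\alpha - \gamma}{\beta + \delta}.$$

The demand elasticity is $\varepsilon_D(p) = \frac{dD}{dp}\cdot\frac{p}{D(p)} = \frac{-\beta p}{\alpha - \beta p}$, and consumer surplus at equilibrium is the area under the demand curve above $p^{*}$, here $(\alpha - \beta p^{*})^{2}/(2\beta)$.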
Social media, viewed as a market for attention, can be subject to negative externalities of at least three kinds: the creation of false or slanted beliefs (e.g., by disproportionately allocating attention to factually false or misleading content), opportunity costs (e.g., by distracting users with clickbait from more rewarding uses of their attention), and direct hedonic costs (e.g., by creating negative social comparison to unrealistic body images or life outcomes) [77].At the same time, some research reports hedonic benefits from receiving attention on social media [85,87]- [89].
Information markets, including social media, are often subject to two kinds of pathology [71]. The first is information overload. It was observed in 1984 that the quantity of information produced grows exponentially, while the amount of information consumed grows at most linearly [90]. This fundamental imbalance continues to be present on social media and will likely worsen as generative AI reduces the cost of content production [91–93]. The second pathology, variously called Gresham's law of information [71,94], Brandolini's law [95], or the "bullshit asymmetry principle" [96], refers to the fact that low-quality information generally crowds out high-quality information, a phenomenon frequently observed on social media [97].
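The overload pathology can be stated formally (a stylized formalization of the observation in [90], with our own notation): if the stock of information grows exponentially, $P(t) = P_0 e^{rt}$ with $r > 0$, while any individual's consumption is bounded by available time, $C(t) \le c\,t$, then

$$\frac{C(t)}{P(t)} \;\le\; \frac{c\,t}{P_0 e^{rt}} \;\longrightarrow\; 0 \quad \text{as } t \to \infty,$$

so the fraction of available information anyone can consume shrinks toward zero, making selection mechanisms such as recommender systems increasingly decisive.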

Market interventions:
There are standard categories of market intervention that can be used to mitigate different kinds of market failure. For example, negative externalities of consumption occur when a good is consumed more than is socially optimal because social costs associated with its consumption are not accounted for within its market price. To correct for this over-consumption, the government can use taxes (which increase the price to reflect the social costs), price controls (which can achieve the same by setting a minimum price), or property rights (which allow those who bear the social costs to seek compensation). Other kinds of interventions (including direct provision, ratings, liability, competition law, as well as various compliance measures) can be used to address other kinds of market failure (see [98] for an introductory text).
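As a worked toy example of the tax case (the numbers are hypothetical), suppose marginal private benefit is $MPB(q) = 10 - q$, marginal private cost is $MPC = 2$, and each unit consumed imposes a marginal external cost $MEC = 2$ on others. The market clears where $MPB(q) = MPC$, giving $q_m = 8$, while the social optimum solves $MPB(q) = MPC + MEC$, giving $q_s = 6$. A per-unit (Pigouvian) tax $t^{*} = MEC = 2$ raises the effective private cost to $4$, so the market equilibrium shifts to the socially optimal quantity.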
Social media is itself a market, so regulatory interventions into social media can be categorized by the type of market intervention they represent (see Table II). Some existing regulations, such as intellectual property rights [99], rights of individual ownership of data [100], or antitrust enforcement actions [101], are naturally described as market interventions. Other kinds of regulation are not normally described using economic language, but can nevertheless be recognized as market interventions, whether literally or by analogy.
• Taxes + subsidies: When there are social costs or benefits associated with production or consumption of a good that are not accounted for in its market price, governments can use taxes or subsidies to shift the price toward its socially optimal level. Taxes can also be used for redistribution. While such interventions are not currently in favor in the social media context, existing proposals include taxing digital ad revenue (California A.B. 2829, 2024) or platform income generally (California S.B. 1327, 2024), perhaps to fund journalism or mental health research [103]. A more targeted proposal is to tax platforms according to the degree to which they contribute to societal division [102]. To the extent that fines currently exist (e.g., in Germany's Network Enforcement Act, 2018, an anti-hate speech law targeting Holocaust denialism [114]), they can also effectively act as a tax on the type of activity they seek to penalize.
• Controls: Price ceilings are maximum prices set to protect consumers, and price floors are minimum prices set to protect producers. To understand what such price controls equate to in the context of social media, consider what it would mean to implement them in the market depicted in Figure 2. Information is "paid for" with attention, so regulations that place limits on the amount of attention required to obtain certain information effectively act as a price ceiling. In the same spirit, regulations can establish a floor on information quality or acceptability (in order to exclude harmful content or incentivize transparency from paid advertisers). Similarly, market mechanisms can protect users from spending too much attention to access the information they want, for example by clearly labeling the source of information and whether content is organic or paid advertising.
• Direct provision: Direct provision occurs when a government directly provides a good that is under-provided by the free market. In the social media context, governments can and do act as information providers, such as through the accounts of publicly-funded media organizations [115].
There are also precedents for governments directly procuring certain kinds of attention, such as when law enforcement agencies co-opt telephone networks to send emergency notifications, and similar interventions are conceivable on social media.
• Ratings: Publicly rating the quality of providers in a market can create a reputational incentive for those providers to improve. In the social media context, civil society and industry groups already publish ratings of both information providers (e.g., NewsGuard [106]) and attention providers (e.g., the Media Rating Council's brand safety ratings for advertising platforms [108]).
• Property rights: Assigning property rights can correct for negative externalities by giving the parties that are harmed the right to seek compensation, effectively raising the costs for the infringing party and deterring them from engaging in the harmful activity. In the social media context, rights to intellectual property and some forms of privacy are already widely legislated. More recently, it has been argued that attention markets currently violate a (proposed) right to our own attention, variously characterized as attentional property rights [77], freedom of impression [116], or a right to mental integrity [110]. Other proposals explore expanded rights to data ownership [91], or a reduced emphasis on speech rights in favor of a right for individuals to influence decisions that affect them [117].
• Liability: Assigning (or granting immunity from) liability for certain kinds of harms can change the costs of engaging in activities that cause those harms, and thus influence the amount of that activity that takes place in a market. In the social media context, platforms are currently broadly protected from liability for hosting user-generated content (e.g., Sec. 230 of the U.S. Communications Decency Act [CDA], 1996), but there are other conceivable liability regimes [118].
• Competition law: Competition law includes a broad set of interventions aimed at promoting competition between providers and avoiding concentrations of market power that would allow providers to act in ways that harm the public interest. In the social media context, this includes laws requiring interoperability (e.g., Art. 7 of the EU Digital Markets Act [DMA], 2022) and processes for disputing content moderation decisions (e.g., Arts. 17, 20-21 of the EU Digital Services Act [DSA], 2022).
• Compliance: Compliance measures include a broad set of requirements for firms, mostly related to due diligence or transparency. In the social media context, these include requirements to provide researchers with access to data (DSA Art. 40, 2022), to conduct audits and systemic risk assessments (DSA Arts. 35 & 37, 2022), and to maintain records about ads and advertisers (DSA Art. 39, 2022), among others.
Similar thinking by analogy enables us to categorize other existing regulations as different kinds of market intervention, as can be seen in Table II.

TABLE II: Market Map - This table provides a non-exhaustive mapping of regulatory approaches grouped by type of market intervention. Interventions with current legal force are shown in black; all other regulatory options are shown in gray. We emphasize that this analysis is descriptive and the inclusion of any given intervention in this table is not an endorsement or recommendation.

Intervention | Description | Regulatory approaches
Taxes + Subsidies | Tax or subsidize to internalize externalities of information production or consumption.
Property Rights | Grant rights to those infringed by the exchange of attention + information. | Rights for owners of intellectual property (Dir. 2019/790, 2019; DMCA, 1998) • Rights to privacy (GDPR, 2018) • Right to have personal data erased, the "right to be forgotten" (GDPR Art. 17, 2018) • Right to be cared for by platforms, a "duty of care" (KOSA, 2023) • Right to allocate our own attention [110] • Right to be compensated for behavioral data, "data dignity" [111,112] • Restriction of data transfer without consent and right to access own data (APRA, proposed)
Liability | Clarify liability for unwanted exchange of attention + information.
Competition Law | Promote competition between information providers. | Improve content moderation dispute processes (DSA Arts. 17, 20-21, 2022) • Interoperability [113] (DMA Art. 7, 2022)

Discussion
This article has described two high-level maps for understanding the space of possible regulatory approaches. Below, we discuss (i) the relative merits of each map, and (ii) how they can be used to understand existing regulatory landscapes. We also (iii) review common trade-offs faced in social media governance, and (iv) discuss the merits of evidence-based regulation.
Architecture or market? Both the architecture and market maps are representations of the same underlying social phenomena and, at least in theory, any given intervention can be described in the language of either map. In practice, however, we believe the two maps are more appropriate for different situations and different classes of harms.
The architecture map best captures prescriptive interventions, particularly design codes and standards, which specify how certain technologies should (or, more likely, should not) be built. In recent years, debates over social media regulation have shifted from a focus on content moderation to a focus on design [119,120], and increasingly there is an evidence base to support specific design interventions [121].
The market map, by contrast, is more appropriate for situations where harmful externalities can be identified, but the specific changes needed to induce change are highly diverse, context-specific, and generally involve interactions between many different actors operating within the market, not just the social media company itself. In such cases, instead of prescribing design choices, regulators could intervene to incentivize a desired outcome, allowing market participants to figure out how best to respond. For example, regulators could specify which harmful impacts a platform must monitor and report publicly, and let subsequent reputational pressure determine what steps platforms take to reduce those harms [122].
Applying the maps to existing regulations: Both maps can be used to compare jurisdictions. For example, it is widely acknowledged that the EU has gone further than the U.S. in regulating social media [123], a reality documented by Tables I and II. At the time of writing, the EU has implemented certain interventions more forcefully (e.g., compare the EU General Data Protection Regulation [GDPR], 2018, to the patchwork of U.S. state-based privacy regulations) and has implemented other kinds of intervention that have no parallel in the U.S. (e.g., the data access provisions of the DSA, 2022, or competition rules in the DMA, 2022).
The architecture map can additionally be used to assess the comprehensiveness of regulation in a given jurisdiction, because there are risks and harms specific to each architecture feature which regulation should (arguably) mitigate. To date, the bulk of successfully-enacted legislation has focused on data storage (mostly privacy protections) and content creation (mostly prohibitions on terrorism, CSAM, and other categories that are overwhelmingly considered harmful; see U.S. Code Ch. 110, 2024). By contrast, technologies for account creation, content viewing, recommender systems, and ad targeting have been subject to fewer legal requirements, though that is beginning to change. In particular, there are many recent attempts to regulate recommender systems (e.g., user controls in the U.S. Kids Online Safety Act [KOSA], 2023), with some regulations enacted (e.g., the non-profiling option required by DSA Art. 38, 2022). In the meantime, absent new policy, law enforcement has brought cases using older, more general rules, such as anti-discrimination [44] or anti-trafficking rules [124].
It is less meaningful to assess coverage with the market map, because the different kinds of market intervention are in some cases alternatives that could equally address a given market failure. For example, a surplus of information about irrelevant historical allegations could in theory be addressed by a ban or tax on such content, instead of a "right to be forgotten" (GDPR, 2018). However, the map does allow us to see what kinds of market intervention are most common. Currently, compliance measures including requirements for transparency, data access, record-keeping, audits, and risk assessments are common, in part because of (a) the need to better understand the impacts of platforms [4], and (b) the fact that such measures are somewhat orthogonal to free expression (though platforms have challenged even reporting requirements in the U.S. on the grounds that they constitute compelled speech and thus violate the First Amendment [125]). By comparison, other kinds of market intervention have been relatively underexplored, and some, such as those focused on attention rather than information and the rights people have with respect to their own attention, present opportunities.
Trade-offs: There is no silver bullet: regulation, and by extension almost any externally-imposed design choice or market incentive, comes at a cost [126]. We discuss here a few of the prominent trade-offs at stake in these decisions.
First, there is an inherent tension between transparency and user privacy [127]. For example, when a platform is required to report on aggregate user activity or share data with researchers, this inherently creates more avenues through which private information might be accessed or misused. There are ways to responsibly navigate this trade-off, such as using differential privacy or remote query evaluation when providing access to researchers. But adhering to such privacy-respecting protocols nonetheless constrains the forms of transparency that are possible.
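As a concrete sketch of one such protocol, the following applies the standard Laplace mechanism from differential privacy to a transparency statistic; the function name and reporting scenario are our own illustration, not a prescribed implementation.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, rng: np.random.Generator) -> float:
    # A count over users has sensitivity 1 (adding or removing one user
    # changes it by at most 1), so Laplace noise with scale 1/epsilon
    # yields epsilon-differential privacy for this single release.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative usage: report how many accounts were suspended last month
# without revealing whether any particular individual is in the count.
rng = np.random.default_rng(seed=0)
print(round(dp_count(12_345, epsilon=0.5, rng=rng), 1))
```

Smaller values of epsilon give stronger privacy but noisier reports, which is exactly the transparency-privacy tension described above.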
Second, there are a number of trade-offs associated with speech rights (which differ by jurisdiction). In some cases, protecting one party's right to post certain information or have that information distributed (e.g., the publication of a public figure's home address) can cause harm to another party. More specific to the American context is the fact that companies' own speech is protected under the First Amendment. This includes "editorial" decisions such as what appears on a platform and what is promoted, but some recent legal arguments have sought to extend this protection to the code on which the platform runs. If this interpretation becomes standard, it is possible the U.S. government might not be able to place any restrictions on recommender systems. How and when technical code is protected as speech, and what level of judicial review restrictions on it must survive, is central to understanding what types of regulation will be deemed constitutional [128].
Third, most interventions aimed at curbing the harms of social media may come at the expense of certain economic benefits, either for users of these services or for the services themselves. For example, constraining a platform's ability to personalize ads in the name of privacy will reduce their effectiveness and hence the price at which the platform can sell them. One of the appeals of taking a market-based approach to regulation is that it suggests we can directly assess the distortionary effects on the market being regulated. To the extent that the value of mitigating externalities is smaller than the value lost via the regulation itself, regulators may choose not to intervene. Of course, such decisions need to weigh disparate effects on sub-populations, and there may be design interventions better suited to differentiating such effects, such as usage-based restrictions or stricter age-based gating where harm to children is the concern.

Evidence-based regulation:
In contrast to some regulatory domains, social media is particularly suited to quantitative measurement, which creates opportunities to scientifically evaluate the impact of regulatory interventions and ensure they are having the desired effect. Such an evidence-based approach to regulation can avoid allocating public resources to enforcing ineffective or counter-productive laws, and avoid the state placing unjustified constraints on individual liberty.
As a cautionary tale, an impact assessment of the EU GDPR found that only five of its articles have had an actual impact in lawmaking. Most of these are enforced reactively, "in response to privacy violation," and the largest fines are related to "security incidents" that have already occurred [129,130]. The actual impacts of such regulations on market dynamics remain poorly understood and are sometimes counter-intuitive [131].

Conclusion
In this review, we proposed two high-level maps for understanding the space of approaches to regulating social media. The first map views social media as an architecture impacted by design choices and categorizes regulatory approaches according to the affordance (e.g., account creation, ad targeting, and so on) that they most closely impact. In keeping with the idea of "regulation by design," this map is most suited to describing prescriptive interventions that stipulate best-practice design choices, where there is an evidence base to support such guidance. The second map views social media as a market (for information, attention, and money) governed by incentives and categorizes regulatory approaches according to the form of market intervention they represent. This map is most suited to describing approaches that seek to constrain the set of acceptable outcomes but leave it to platforms and other market participants to determine how best to achieve those outcomes. While the maps are merely descriptive, they can be used by policymakers to understand what approaches are available, how regulations differ across jurisdictions, and what kinds of intervention are most likely to achieve the outcomes they are interested in.

Open Access
This MIT Science Policy Review article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Appendix
This appendix includes all the regulations considered for this review, grouped by jurisdiction. These were drawn from three policy trackers [14–16], which are (at the time of writing) actively maintained. The list of regulations, along with a more extended set of classifications, can be found online.

European Union
The Commission is empowered to issue binding non-compliance decisions that order gatekeepers to cease and desist by an appropriate deadline.

Enacted - Article 30: 'Fines' In the event of a non-compliance decision or failure to comply with obligations or commitments, the Commission may impose fines ranging from 1 to 20% of the gatekeeper's worldwide turnover, with limits depending on the violation.

Enacted - Article 31: 'Periodic penalty payments'
Enables the Commission to issue periodic penalty payments of up to 5% of a gatekeeper's daily worldwide turnover to compel gatekeepers to comply with various decisions, inspections, commitments, and information requests.
Enacted - Article 32: 'Limitation periods for the imposition of penalties' Fines are limited to a five-year period, starting either on the day on which an infringement is committed or, in the case of repeat offenses, on the day of the last infringement.
Enacted - Article 33: 'Limitation periods for the enforcement of penalties' Further clarification on enforcing fines within a five-year period.

Enacted - Article 34: 'Right to be heard and access to the file' The Commission will give gatekeepers the opportunity to review and comment on preliminary findings before a decision pursuant to various other articles is issued.

The Commission will submit a yearly report on the implementation of the Act and its progress to the European Parliament.
Enacted - Article 36: 'Professional secrecy' Information collected for investigations and other purposes will be used only for purposes of the Act and will not be disclosed.

• The Commission has opened non-compliance investigations under the Digital Markets Act (DMA) into Alphabet's rules on steering in Google Play and self-preferencing on Google Search, Apple's rules on steering in the App Store and the choice screen for Safari, and Meta's 'pay or consent' model.
• Prevents children under 14 from accessing social media entirely and requires parental consent to create social media accounts for those aged 14-15.
• Establishes further rights for IN residents in regards to their data and further responsibilities for companies storing or processing their data.
• Bans social media platforms from 'using certain practices or features that cause child users to become addicted' to them.
• Bans certain forms of data collection and targeted advertising and requires social media companies to conduct assessments of the impacts of their products on children.
• Officially signed into law in September 2021, this bill sets up complaint processes and disclosure obligations for social media platforms concerning content management and removal. Notably, it forbids social media companies from censoring (banning, demonetizing, or otherwise restricting) user content based on the user's or another person's viewpoint, the perspective expressed, or the user's geographic location within the state of Texas.
• Australia enacted the Online Safety Act of 2021, which aimed to protect not only children but also adults, specifically those targeted by cyber-abuse.