


Sunday, October 02, 2022

Is This the Beginning of the End of the Internet?


By Charlie Warzel 
    Getty; The Atlantic

    "Occasionally, something happens that is so blatantly and obviously misguided that trying to explain it rationally makes you sound ridiculous. Such is the case with the Fifth Circuit Court of Appeals’s recent ruling in NetChoice v. Paxton. Earlier this month, the court upheld a preposterous Texas law stating that online platforms with more than 50 million monthly active users in the United States no longer have First Amendment rights regarding their editorial decisions. Put another way, the law tells big social-media companies that they can’t moderate the content on their platforms. YouTube purging terrorist-recruitment videos? Illegal. Twitter removing a violent cell of neo-Nazis harassing people with death threats? Sorry, that’s censorship, according to Andy Oldham, a judge of the United States Court of Appeals and the former general counsel to Texas Governor Greg Abbott.

    A state compelling social-media companies to host all user content without restrictions isn’t merely, as the First Amendment litigation lawyer Ken White put it on Twitter, “the most angrily incoherent First Amendment decision I think I’ve ever read.” It’s also the type of ruling that threatens to blow up the architecture of the internet. To understand why requires some expertise in First Amendment law and content-moderation policy, and a grounding in what makes the internet a truly transformational technology. So I called up some legal and tech-policy experts and asked them to explain the Fifth Circuit ruling—and its consequences—to me as if I were a precocious 5-year-old with a strange interest in jurisprudence.

    Techdirt founder Mike Masnick, who has been writing for decades about the intersection of tech policy and civil liberties, told me that the ruling is “fractally wrong”—made up of so many layers of wrongness that, in order to fully comprehend its significance, “you must understand the historical wrongness before the legal wrongness, before you can get to the technical wrongness.” In theory, the ruling means that any state in the Fifth Circuit (Texas, Louisiana, and Mississippi) could “mandate that news organizations must cover certain politicians or certain other content” and even implies that “the state can now compel any speech it wants on private property.” The law would allow both the Texas attorney general and private citizens who do business in Texas to bring suit against the platforms if they feel their content was removed because of a specific viewpoint. Daphne Keller, the director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, told me that such a law could amount to “a litigation DDoS [Denial of Service] attack, unleashing a wave of potentially frivolous and serious suits against the platforms.”

    To give me a sense of just how sweeping and nonsensical the law could be in practice, Masnick suggested that, under the logic of the ruling, it very well could be illegal to update Wikipedia in Texas, because any user attempt to add to a page could be deemed an act of censorship based on the viewpoint of that user (which the law forbids). The same could be true of chat platforms, including iMessage and Reddit, and perhaps also Discord, which is built on tens of thousands of private chat rooms run by private moderators. Enforcement at that scale is nearly impossible. This week, to demonstrate the absurdity of the law and stress test possible Texas enforcement, the subreddit r/PoliticalHumor mandated that every comment in the forum include the phrase “Greg Abbott is a little piss baby” or be deleted. “We realized what a ripe situation this is, so we’re going to flagrantly break this law,” a moderator of the subreddit wrote. “Also, we like this Constitution thing. Seems like it has some good ideas.”

    Everyone I spoke with believes that the very future of how the internet works is at stake. Accordingly, this case is likely to head to the Supreme Court. Part of this fiasco touches on the debate around Section 230 of the Communications Decency Act, which, despite its political-lightning-rod status, makes it extremely clear that websites have editorial control. “Section 230 tells platforms, ‘You’re not the author of what people on your platform put up, but that doesn’t mean you can’t clean up your own yard and get rid of stuff you don’t like.’ That has served the internet very well,” Dan Novack, a First Amendment attorney, told me. In effect, it allows websites that host third-party content to determine whether they want a family-friendly community or an edgy and chaotic one. This, Masnick argued, is what makes the internet useful, and Section 230 has “set up the ground rules in which all manner of experimentation happens online,” even if it’s also responsible for quite a bit of the internet’s toxicity.

    But the full editorial control that Section 230 protects isn’t just a boon for giants such as Facebook and YouTube. Take spam: Every online community—from large platforms to niche forums—has the freedom to build the environment that makes sense to them, and part of that freedom is deciding how to deal with bad actors (for example, bot accounts that spam you with offers for natural male enhancement). Keller suggested that the law may have a carve-out for spam—which is often filtered because of the way it’s disseminated, not because of its viewpoint (though this gets complicated with spammy political emails). But one way to look at content moderation is as a constant battle for online communities, where bad actors are always a step ahead. The Texas law would kneecap platforms’ abilities to respond to a dynamic threat.

    “It says, ‘Hey, the government can decide how you deal with content and how you decide what community you want to build or who gets to be a part of that community and how you can deal with your bad actors,’” Masnick said. “Which sounds fundamentally like a totally different idea of the internet.”

    “A lot of people envision the First Amendment in this affirmative way, where it is about your right to say what you want to say,” Novack told me. “But the First Amendment is just as much about protecting your right to be silent. And it’s not just about speech but things adjacent to your speech—like what content you want to be associated or not associated with. This law and the conservative support of it shreds those notions into ribbons.”

    The implications are terrifying and made all the worse by the language of Judge Oldham’s ruling. Perhaps the best example of this brazen obtuseness is Oldham’s argument about “the Platforms’ obsession with terrorists and Nazis,” concerns that he suggests are “fanciful” and “hypothetical.” Of course, such concerns are not hypothetical; they’re a central issue for any large-scale platform’s content-moderation team. In 2015, for example, the Brookings Institution issued a 68-page report titled “The ISIS Twitter Census,” mapping the network of terrorist supporters flooding the platform. The report found that in 2014, there were at least 46,000 ISIS accounts on Twitter posting graphic violent content and using the platform to recruit and collect intelligence for the Islamic State.

    I asked Masnick whether he felt that Oldham’s ruling was rooted in a fundamental misunderstanding of the internet, or whether it was more malicious—a form of judiciary trolling resulting from former President Donald Trump getting kicked off of Twitter.

    He likened the ruling to this past summer’s Dobbs v. Jackson Women’s Health Organization, which overturned Roe v. Wade and took away Americans’ constitutional right to an abortion. “You had 50 years of conservative activists pushing for the overturning of Roe, but this Texas ruling actually goes against almost everything the conservative judicial activists have worked toward for decades,” Masnick said. “You have Citizens United, Hobby Lobby, the [Masterpiece Cakeshop] case, which are all complicated, but at the core, they are rooted in how to conceive of First Amendment rights. And in all cases, the conservative justices on the Supreme Court have been all about the right to expand First Amendment rights inside organizations, especially the right to exclude.”

    If the case ends up before the Supreme Court, many of the justices would have to decide against their priors in order to uphold the Texas law. Specifically, Justice Brett Kavanaugh would need to directly contradict his opinion in Manhattan Community Access Corp. v. Halleck, a case where Kavanaugh clearly argued that private forums have First Amendment rights to editorial discretion.

    Keller, of Stanford’s Cyber Policy Center, has tried to game out future scenarios, such as social networks having a default non-moderated version that might quickly become unusable, and a separate opt-in version with all the normal checks and balances (terms-of-service agreements and spam filters) that sites have now. But how would a company go about building and running two versions of the same platform at once? Would the Chaos Version run only in Texas? Or would companies try to exclude Texas residents from their platforms?

    “You have potential situations where companies would have to say, ‘Okay, we’re kicking off this neo-Nazi, but he’s allowed to stay on in Texas,’” Masnick said. “But what if the neo-Nazi doesn’t live in Texas?” The same goes for more famous banned users, such as Trump. Do you ban Trump’s tweets in every state except Texas? It seems almost impossible for companies to comply with this law in a way that makes sense. The more likely reality, Masnick suggests, is that companies will be unable to comply and will end up ignoring it, and the Texas attorney general will keep filing suit against them, causing more simmering resentment among conservatives against Big Tech.

    What is the endgame of a law that is both onerous to enforce and seemingly impossible to comply with? Keller offered two theories: “I think passing this law was so much fun for these legislators, and I think they might have expected it would get struck down, so the theater was the point.” But she also believes that there is likely some lack of understanding among those responsible for the law about just how extreme the First Amendment is in practice. “Most people don’t realize how much horrible speech is legal,” she said, arguing that historically, the constitutional right has confounded logic on both the political left and right. “These legislators think that they’re opening the door to some stuff that might offend liberals. But I don’t know if they realize they are also opening the door to barely legal child porn or pro-anorexia content and beheading videos. I don’t think they’ve understood how bad the bad is.”

    NetChoice v. Paxton is likely an opening salvo in a long, complex, and dangerous legal battle. But Keller offered up a more troubling possibility: This law amounts to a legal speed run that could drastically alter First Amendment law in such a way as to quickly end the battle. “The Supreme Court could strike this down but offer a framework for future litigation that opens the door to new kinds of laws we’ve never seen before,” she said. “Who knows what rule set we’ll be playing with after the Supreme Court weighs in.”

    What does seem clear is that this law is an outgrowth of politicians waking up to the raw power of the internet as a communications platform. Lawmakers’ desire to preserve or destroy content moderation is a battle for the soul of the internet, the limits of free expression, and the direction of our politics. We, the users, are caught in the middle."

