
Friday, April 21, 2023

ChatGPT Is Already Changing How I Do My Job

An abstract illustration of unfiltered internet content being transformed into filtered internet content. (Illustration: Gizem Vural)

“Once you start using ChatGPT you pretty much can’t stop. You begin with trivial, gimmicky prompts: Do this math problem. Tell me some vegetarian recipes with broccoli and peas. What came first, the jock or the jockstrap?

But as the artificial intelligence chatbot easily dispatches your gimmes, you begin to take it more seriously. Over weeks and months you tinker with it, learn its capabilities and its deficiencies, imagine its possibilities for good and ill, its potential for ubiquity and for indispensability. Soon, ChatGPT starts to etch a groove into your life. Now you think of it differently — less as a dancing pony than as a workhorse. You find yourself reaching for it for big tasks and small, and though it fails often, it feels just helpful enough that you can imagine lots of people soon coming to depend on it.

Only a few times in my life have I experienced this creeping sense of possibility with a new technology. The last time was the iPhone; the others were probably Google search and the internet itself. All these were groundbreaking at the start, but none of them changed anything overnight. Instead, what was most compelling was how easy it was to imagine them becoming more and more useful to more and more people. Five years after Apple unveiled the iPhone, there seemed to be an app for everything, and nearly half of American adults owned a smartphone; five years after that, just over three-quarters did, and it was hard to think of anything smartphones hadn’t changed.

ChatGPT feels similarly big. It’s been less than five months since the artificial intelligence company OpenAI released its chatbot. ChatGPT is far from perfect; OpenAI continues to refer to it as a “research preview.” Still, as my colleagues at The Upshot documented recently, doctors, software engineers, fiction writers, stay-at-home parents and many others have already begun to rely on A.I. for important tasks.

These accounts echo my own experience. As I’ve come to learn what it can do and what it can’t, ChatGPT has earned a regular place in my workflow — and in my worries. I keep thinking of new tasks for it, of different ways it might alter my own job and the larger media industry, and of new ethical, legal and philosophical questions it raises for journalism and how people get the news.

Other tech-friendly journalists I know have been going through something similar: Suddenly, we’ve got something like a jetpack to strap to our work. Sure, the jetpack is kinda buggy. Yes, sometimes it crashes and burns. And the rules for its use aren’t clear, so you’ve got to be super careful with it. But sometimes it soars, shrinking tasks that would have taken hours down to mere minutes, sometimes minutes to seconds.

It will most likely take years of trial and error — maybe huge error — to figure out how it should fit into the profession. Steve Duenes, a deputy managing editor at The Times, told me that a working group in the newsroom is currently developing guidelines and exploring opportunities for the use of chatbots by journalists at the paper.

Even as we’re figuring all of this out, to me, this much already seems clear: Sooner rather than later, something like ChatGPT will become a regular part of many journalists’ tool kits.

Here are some ways I’ve been using it:

Wordfinding. One common worry about ChatGPT is that people will pass off its content as their own, but I don’t think that’s in the offing just yet. ChatGPT is a very clunky writer — its prose is dull and brims with cliché (“the human condition,” “humble beginnings,” “triumph over adversity,” barf).

Where it does really help, though, is in digging up that perfect word or phrase you’re having trouble summoning. In my jetpack metaphor up above, I’d originally written that when the jetpack is working, it “screams.” I knew “screams” wasn’t right; before ChatGPT I might have used a thesaurus or just pounded my head on the wall until the right word came to me. This time I just plugged the whole paragraph into ChatGPT and asked it for alternative verbs; “soars,” its top suggestion, was just the word that had been eluding me.
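(For the curious: the column describes pasting the paragraph into the ChatGPT window, but the same wordfinding trick can be scripted. The sketch below is only an illustration; the model name, the prompt wording, and the assumption that the openai Python package is installed with an OPENAI_API_KEY in the environment are all assumptions, not anything the column specifies.)

```python
# Illustrative sketch only: the column describes using the ChatGPT web
# interface; this shows the same wordfinding request made through the
# OpenAI Python SDK (v1+). Model name and prompt wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

paragraph = (
    "Sure, the jetpack is kinda buggy. Yes, sometimes it crashes and burns. "
    "But sometimes it screams, shrinking tasks that would have taken hours "
    "down to mere minutes."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat model would do
    messages=[
        {
            "role": "user",
            "content": "Suggest five alternative verbs for 'screams' in the "
                       "paragraph below, keeping the jetpack metaphor:\n\n"
                       + paragraph,
        }
    ],
)

print(response.choices[0].message.content)
```

The model's top suggestion is just that, a suggestion; as with the thesaurus, the judgment about which word actually fits stays with the writer.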

This may sound like a small win, but these things add up. I’ve spent many painful minutes of my life scouring my mind for the right word. ChatGPT is making that problem a thing of the past.

Getting unstuck. Nicholas Carlson, the global editor in chief of Insider, sent a memo to members of his staff last week, encouraging them to begin cautiously experimenting with ChatGPT. Carlson has been using the chatbot extensively, and told me he’s come to think of it as “a two-player word processor” that can help people overcome routine stumbling blocks in writing.

Take the problem of transitions — you’ve written two sections of an article and you’re struggling to write a paragraph taking the reader from one part to the other. Now you can plug both sections into ChatGPT and ask for its thoughts. ChatGPT’s proposed transition probably won’t be great, but even bad ideas can help in overcoming a block. “As a writer I like getting an idea from an editor to rewrite till it’s mine,” Carlson told me. ChatGPT functions as that editor — your always available, spitballing friend.

Summarizing. When big, complicated news stories break — a court ruling, an earnings report, a politician’s financial disclosure forms — editors and reporters often have to quickly determine the gist of the news to figure out how to cover it. ChatGPT excels at this sort of boiling down: Give it a long document and it will pull out big themes instantly and seemingly reliably.

Carlson used it this way when Donald Trump was indicted: He gave ChatGPT the charging documents and asked it for a 300-word summary. “I want a reporter to read the whole indictment and understand it extremely well,” he told me, but in the moment of breaking news, Carlson just wanted the big picture. “It did it, and it was helpful,” he said.
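(Again, not something the column spells out: that kind of boiling-down can also be done programmatically. The sketch below assumes the openai Python package, an OPENAI_API_KEY in the environment, and a local file named indictment.txt standing in for the charging documents; the 300-word target is the only detail taken from Carlson's account.)

```python
# Illustrative sketch: asking a chat model for a roughly 300-word summary of
# a long document. The file name, model, and prompt are placeholders; only
# the 300-word target comes from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("indictment.txt", encoding="utf-8") as f:
    document = f.read()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed; a long filing may need a larger-context model
    messages=[
        {
            "role": "user",
            "content": "Summarize the following document in roughly 300 words, "
                       "focusing on the main allegations and charges:\n\n"
                       + document,
        }
    ],
)

print(response.choices[0].message.content)
```

As the next paragraphs make clear, any summary produced this way still has to be checked against the document itself before anyone relies on it.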

But wait a second. How could Carlson be sure that ChatGPT’s summary was accurate enough for him to rely on for deciding how to cover a story? More generally, how can any journalist be certain that anything ChatGPT says is reliable?

The short answer is: You can’t. ChatGPT and other chatbots are known to make stuff up or otherwise spew out incorrect information. They’re also black boxes. Not even ChatGPT’s creators fully know why it suggests some ideas over others, or which way its biases run, or the myriad other ways it may screw up.

Such problems call for great caution in its use — and as far as I can tell, publications are being cautious. (Carlson, too, has set up a working group to come up with guidelines for using ChatGPT at Insider.)

There are so many other ways I can imagine ChatGPT being used in the news business: An editor could call on it to generate headline ideas. An audio producer could ask it for interview questions for a podcast guest. A reporter approaching a new topic might ask it to suggest five experts to talk to, to get up to speed.

But some of these could be quite problematic. If ChatGPT is involved in selecting the sources we talk to or the questions we ask, journalists’ work will at some level be influenced by this mysterious oracle whose biases and motivations we can’t see. (For now, don’t worry, I sought out Duenes and Carlson on my own, without ChatGPT’s help.)

Carlson floated one idea I liked: to think of ChatGPT as a semi-reliable source. “Trust it the same way you would trust a blabbermouth blowhard at a bar three drinks in who is pretending to know everything,” he suggested. You check everything that the source says — a lot of times it might be nonsense, but sometimes the blabbermouth turns out to know what he’s talking about.

This is how ChatGPT is changing my own industry. I imagine similar thorny questions are roiling many other professions. And there won’t be many easy answers.

Office Hours With Farhad Manjoo

Farhad wants to chat with readers on the phone. If you’re interested in talking to a New York Times columnist about anything that’s on your mind, please fill out this form. Farhad will select a few readers to call.”
