Friday, September 29, 2017

Uber: We don’t have to pay drivers based on rider fares


Contracts allow rider fares to be higher than what is known and paid to drivers.

DAVID KRAVETS - 9/18/2017, 4:30 PM

Uber is fighting a proposed class-action lawsuit that says it secretly overcharges riders and underpays drivers. In its defense, the ride-hailing service claims that nobody is being defrauded by its "upfront" rider fare pricing model.

The fares charged to riders don't have to match up with the fares paid to drivers, Uber said, because that's what a driver's "agreement" allows.

"Plaintiff's allegations are premised on the notion that, once Uber implemented Upfront Pricing for riders, it was required under the terms of the Agreement to change how the Fare was calculated for Drivers," Uber said (PDF) in a recent court filing seeking to have the class-action tossed. "This conclusion rests on a misinterpretation of the Agreement."

The suit claims that, when a rider uses Uber's app to hail a ride, the fare the app immediately shows the passenger is based on a slower, longer route than the one displayed to the driver. The rider pays the higher fare, and the driver's commission is calculated from the cheaper, faster route, according to the lawsuit.

Uber claims the disparity between rider and driver fares "was hardly a secret."

"Drivers," Uber told a federal judge, "could have simply asked a User how much he or she paid for the trip to learn of any discrepancy."

A contract is a contract

Uber doesn't consider its drivers employees, and it doesn't call their pay "commissions." Instead, it allows drivers to keep the fare presented to them in the Uber driver app, even if the fare is different from what the rider was charged. The driver then pays Uber a "service fee"—a percentage of the fare earned by the driver.

The San Francisco-based ride-hailing service also claims that it took "significant risk" under this "upfront" fare pricing model, which began last year.

Plaintiff further alleges that, after Upfront Pricing began, Drivers continued to earn based on the trip’s distance and the amount of time it actually took to complete the trip. Plaintiff claims the Upfront Price is often higher than the Fare, which is the basis of what is remitted to him. He neglects to mention, however, the significant risk placed on Uber, not Drivers, by Upfront Pricing: the User’s Upfront Price may just as easily disadvantage Uber, for example, where an actual trip takes longer than expected, yet the Driver’s earnings calculation remains constant.

What's more, a rider might also pay Uber more than what the driver's fare is based on because a driver's contract allows Uber to "adjust" the fare known and paid to the driver, according to Uber's legal filing.

"The Agreement allows Uber to adjust the Fare under various circumstances. For example, Uber is permitted to make changes to the Fare Calculation based on local market factors," Uber said in its federal court response. "Likewise, Uber may adjust the Fare based on other factors such as inefficient routes, technical errors, or customer complaints."

And here's the kicker:

Drivers disclaim any right to receive amounts over and above the Fare produced by the Fare Calculation.

The suit, which seeks class-action status, demands back pay and legal fees. It wants a Los Angeles federal judge to halt the alleged "unlawful, deceptive, fraudulent, and unfair business practices."

A hearing is set for December 1.

---

DAVID KRAVETS The senior editor for Ars Technica. Founder of TYDN fake news site. Technologist. Political scientist. Humorist. Dad of two boys. Been doing journalism for so long I remember manual typewriters with real paper. EMAIL david.kravets@arstechnica.com // TWITTER @dmkravets

Friday, September 15, 2017

Fake News: What Laws Are Designed to Protect


by Jane Haskins, Esq., freelance writer

Just a few years ago, "fake news" was something you'd find in supermarket tabloids.

Now, though, the line between "fake news" and "real news" can seem awfully blurry. "Fake news" has been blamed for everything from swaying the U.S. presidential election to prompting a man to open fire at a Washington, DC pizza parlor.

What's Fake and What's Real?

A "real news" outlet, such as a major newspaper or television network, might make mistakes, but it doesn't distribute false information on purpose. Reporters and editors who report real news have a code of ethics that includes using reputable sources, checking facts, and getting comments from people on both sides of an issue.

Fake news outlets, on the other hand, are designed to deceive. They might have URLs that sound like legitimate news organizations, and they might even copy other news sites' design. They may invent "news" stories or republish stories from other internet sources without checking to see if they are true. Their purpose is usually to get "clicks" and generate ad revenue or to promote their owners' political viewpoint.

Some "fake news" is published on satire sites that are usually clearly labeled as satire. However, when people share articles without reading beyond the headline, a story that was supposed to be a parody can end up being taken as the truth.

Can't the Legal System Punish Fake News?

The First Amendment protects Americans' rights to freely exchange ideas—even false or controversial ones. If the government passed laws outlawing fake news, that would be censorship that would also have a chilling effect on real news that people disagree with.

The main legal recourse against fake news is a defamation lawsuit. You can sue someone for defamation if they published a false fact about you and you suffered some sort of damage as a result—such as a lost job, a decline in revenue, or a tarnished reputation. If you are an ordinary, private person, you also must show that the news outlet was negligent (or careless).

But most fake news relates to public figures, who can only win a defamation lawsuit by showing that the news outlet acted with "actual malice." This means that the author must have known the story was false or must have had a "reckless disregard" for whether it was true or not. It's usually a difficult standard to meet, but defamation suits may become more common as concern about fake news grows.

For example, Chobani yogurt recently filed a defamation suit against conspiracy theorist Alex Jones and his site, Infowars, over a video and tweet headlined "Idaho Yogurt Maker Caught Importing Migrant Rapists." Jones' tweet led to a boycott of the popular yogurt brand.

Defamation liability isn't limited to the person who first published a fake story—it extends to anyone who republishes it on a website or blog. Melania Trump, for example, recently settled defamation lawsuits against a Maryland blogger who published such an article in August 2016, and against the Daily Mail, which published a similar false article online later that month.

How to Spot Fake News and Stop it from Spreading

Fake news can be hard to identify, with some fake news sites looking and sounding almost exactly like well-known media outlets. Here are some tips for figuring out what's fake and what's real:
  • Read beyond the headline. The article may be labeled as a parody or it may just sound too outlandish to be true.
  • Check the story out on Snopes.com, which has been researching rumors and false stories for two decades. For political news, try FactCheck.org.
  • See if the story comes from one of these fake news websites identified by PolitiFact.com as part of a collaborative effort with Facebook.
  • Fact check the story yourself. Do an online search to confirm the main facts in the story, click on any links provided, and read the sources. Also look for any reports identifying the site as a fake news site, and/or look up the author's bio online.
In the end, the law can't protect you from fake news. Get your news from sources that you know are reputable, do your research, and read beyond the headlines. And, if you find out an article is fake, don't share it. That's the surest way to stop a false story from spreading.

Thursday, September 14, 2017

Fake News 101? Lawmakers want California schools to teach students how to evaluate what they read on the web


By Melanie Mason - Los Angeles Times (01/11/2017)

Politicians and members of the media are increasingly bemoaning the rise of "fake news," though rarely is there agreement on how to define it. But can this new phenomenon be legislated away?

Two separate bills introduced by Democratic lawmakers Wednesday aim to address the problem with proposals that would help teach Californians to think more critically about the news they read online.

Assemblyman Jimmy Gomez (D-Los Angeles) has introduced a measure that would require the state to develop curriculum standards that incorporate "civic online reasoning" to teach students how to evaluate news they read on the Internet.

"Recently, we have seen the corrupting effects of a deliberate propaganda campaign driven by fake news," Gomez said in a statement. "When fake news is repeated, it becomes difficult for the public to discern what's real. These attempts to mislead readers pose a direct threat to our democracy."

Gomez said his bill, AB 155, would prepare California students to differentiate "between news intended to inform and fake news intended to mislead."

In a similar measure, SB 135 by state Sen. Bill Dodd (D-Napa), the state education board would be tasked with creating a framework for a "media literacy" curriculum.

“The rise of fake and misleading news is deeply concerning. Even more concerning is the lack of education provided to ensure people can distinguish what is fact and what’s not,” Dodd said in a statement.

The fake news phenomenon burst into public consciousness at the close of the 2016 election, when analysts found that factually inaccurate news stories found surprisingly large audiences online.

But defining "fake news" has increasingly become a thorny exercise, as partisans have used the phrase to disparage news stories they dislike.

Both President Obama and President-elect Donald Trump have referenced the fake news phenomenon. Obama, in remarks after the election, forcefully lamented the "age of misinformation [that is] packaged very well, and it looks the same when you see it on a Facebook page or you turn on your television."

On Wednesday, Trump dismissed news coverage of a report detailing unverified allegations of his supposed ties to Russia as "fake news," and singled out the outlets BuzzFeed and CNN as purveyors of false information.

Op-Ed: Google and Facebook aren't fighting fake news with the right weapons

By Matthew A. Baum and David Lazer (05/08/2017) - Los Angeles Times

We know a lot about fake news. It’s an old problem. Academics have been studying it — and how to combat it — for decades. In 1925, Harper’s Magazine published “Fake News and the Public,” calling its spread via new communication technologies “a source of unprecedented danger.”

That danger has only increased. Some of the most shared “news stories” from the 2016 U.S. election — such as Hillary Clinton selling weapons to Islamic State or the pope endorsing Donald Trump for president — were simply made up.

Unfortunately — as a conference we recently convened at Harvard revealed — the solutions Google, Facebook and other tech giants and media companies are pursuing aren’t in many instances the ones social scientists and computer scientists are convinced will work.

We know, for example, that the more you’re exposed to things that aren’t true, the more likely you are to eventually accept them as true. As recent studies led by psychologist Gordon Pennycook, political scientist Adam Berinsky and others have shown, over time people tend to forget where or how they found out about a news story. When they encounter it again, it is familiar from the prior exposure, and so they are more likely to accept it as true. It doesn’t matter if from the start it was labeled as fake news or unreliable — repetition is what counts.

Reducing acceptance of fake news thus means making it less familiar. Editors, producers, distributors and aggregators need to stop repeating these stories, especially in their headlines. For example, a fact-check story about “birtherism” should lead by debunking the myth, not restating it. This flies in the face of a lot of traditional journalistic practice.

The online Washington Post regularly features “Fact Checker” headlines consisting of claims to be evaluated, with a “Pinocchio Test” appearing at the end of the accompanying story. The problem is that readers are more likely to notice and remember the claim than the conclusion.

Another thing we know is that shocking claims stick in your memory. A long-standing body of research shows that people are more likely to attend to and later recall a sensational or negative headline, even if a fact checker flags it as suspect. Fake news stories nearly always feature alarming claims designed to grab the attention of Web surfers. Fact checkers can’t compete — especially if their findings are writ small.

To persuade people that fake news is fake, the messenger is as important as the message. When it comes to correcting falsehoods, a fellow partisan is often more persuasive than a neutral third party. For instance, Trump is arguably the individual most closely associated with birtherism. But in September 2016, Trump publicly announced that Obama was a native-born American, "period." Polling a few days later showed an 18-percentage-point drop among registered Republicans in acceptance of the birther myth. Countless debunking stories by fact checkers had far less impact.

The Internet platforms have perhaps the most important role in the fight against fake news. They need to move suspect news stories farther down the lists of items returned through search engines or social media feeds. The key to evaluating credibility, and story placement, is to focus not on individual items but on the cumulative stream of content from a given website. Evaluating individual stories is simply too slow to reliably stem their spread.

Google recently announced some promising steps in this direction. It was responding to criticism that its search algorithm had elevated to front-page status some stories featuring Holocaust denial and false information about the 2016 election. But more remains to be done. Holocaust denial is, after all, low-hanging fruit, relatively easily flagged. Yet even here Google’s initial efforts produced at best mixed results, initially shifting the denial site downward, then ceasing to work reliably, before ultimately eliminating the site from search results.

The platforms must also wrestle more seriously with how to resist manipulation. Recent research led by computer scientist Filippo Menczer highlights the synchronized push of fake news by millions of bots on social media and has developed new ways of detecting them. In a white paper released last month, Facebook claims that its top priority is making sure accounts are owned by real people. Yet its visible efforts to date to purge fake accounts — most notably 30,000 in France ahead of that nation’s presidential election — seem small relative to the scale of the problem. By its own estimates, Facebook may have as many as 138 million duplicate or false accounts.

Finally, the public must hold Facebook, Google and other platforms to account for their choices. It is almost impossible to assess how real or effective their anti-fake news efforts are because the platforms control the data necessary for such evaluations. Independent researchers must have access to these data in a way that protects user privacy but helps us all figure out what is or is not working in the fight against misinformation.

For all the talk about fake news, the truth is that we know a lot about why people read, believe, and share things that aren’t true. Now we just need the big technology platforms and media companies to take the truth to heart.
Matthew A. Baum is the Marvin Kalb professor of global communication and professor of public policy at Harvard University’s John F. Kennedy School of Government. David Lazer is a distinguished professor in political science and computer and information science at Northeastern University and visiting scholar at the Institute for Quantitative Social Science at Harvard.
