Earlier this year, when Tesla CEO Elon Musk tweeted about an upcoming product launch, the market paid attention. That tweet—which divulged only that the new product was not a car and would be unveiled April 30—sent Tesla’s shares soaring and boosted the electric carmaker’s market capitalization by nearly a billion dollars. (Tesla later revealed the new product to be a battery to power homes.) But not all dispatches from the Twitterverse carry the same weight as one from a bold-faced name such as Musk. While the use of social media to inform trading decisions has caught fire in recent years, the fact of the matter is that investors scouring new social platforms are still sifting through more useless information than anything even remotely resembling a market-moving message.
Sophisticated investors don’t want information from just “anyone and everyone,” said Social Alpha CEO Prem Melville, speaking at a recent Credit Suisse panel discussion on crowd-sourcing market intelligence. Melville, whose firm combs Twitter and news sources for content relevant to investors, says his clients “value information from reliable, high value sources. They’re turned off by the flood of messages by the clueless guy in his basement tweeting about his portfolio.”
Melville and his industry peers say they have come a long way in putting tools in place to separate valuable information from the noise. Social Alpha, for instance, has developed a machine-learning model to score every Twitter user’s ability to influence others and ultimately move markets. Called the Social Alpha Influence Score, it helps the firm distinguish reliable sources from purveyors of useless chatter. Melville says his firm has also “taken a sledgehammer” to spam accounts—including those that elude Twitter’s own spam detection efforts—with algorithms that can spot spammers with 99.7 percent accuracy.
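Social Alpha has not published how its spam detection works, but the general approach of scoring accounts on behavioral signals can be sketched in a few lines. The features and weights below are purely illustrative assumptions, not the firm's actual model:

```python
# Toy sketch of feature-based spam scoring for social media accounts.
# Feature choices and weights are illustrative assumptions only;
# Social Alpha's actual model is proprietary.

def spam_score(account):
    """Return a score in [0, 1]; higher means more spam-like."""
    followers = max(account["followers"], 1)
    # Spammers often follow many accounts but attract few followers.
    follow_ratio = min(account["following"] / followers, 10.0) / 10.0
    # A high share of near-duplicate tweets suggests automation.
    duplicate_rate = account["duplicate_tweets"] / max(account["tweets"], 1)
    # Link-stuffed timelines are another common spam signal.
    link_rate = account["tweets_with_links"] / max(account["tweets"], 1)
    # Fixed weighted combination; a real system would learn these weights.
    return 0.4 * follow_ratio + 0.4 * duplicate_rate + 0.2 * link_rate

# A hypothetical bot-like account: few followers, mass following,
# mostly duplicated, link-heavy tweets.
bot = {"followers": 50, "following": 5000, "tweets": 1000,
       "duplicate_tweets": 800, "tweets_with_links": 900}
print(round(spam_score(bot), 2))  # scores close to 1.0
```

In practice a classifier trained on labeled accounts would replace the hand-set weights, but the idea is the same: spam-like behavior is quantifiable.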
Dataminr, which focuses on detecting news events through Twitter, uses geolocation as an important tool. Kaylash Patel, head of client services, says that while only 2 percent of Twitter users have their geolocation options switched on, Dataminr can still determine locations for about 60 percent of Twitter users by analyzing their social media networks. When tweets begin coming in about a possible news event happening in a certain location, says Patel, “you can then geolocate other people around who may or may not be talking about the same thing. We’re then able to use Twitter as a self-correcting mechanism to figure out if something is actually happening or not.”
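Dataminr's method is proprietary, but the core intuition of inferring a user's location from their network can be illustrated simply: if most of the accounts a user interacts with share a known location, that location is a reasonable guess for the user too. A minimal sketch, under that assumption:

```python
from collections import Counter

# Illustrative sketch only: infer an unlabeled user's likely location
# from the self-reported locations of accounts in their network.
# Dataminr's actual technique is not public.

def infer_location(neighbor_locations):
    """Guess the most common known location among a user's contacts.

    neighbor_locations: list of location strings, with None for
    contacts whose location is unknown.
    """
    known = [loc for loc in neighbor_locations if loc is not None]
    if not known:
        return None  # nothing to infer from
    return Counter(known).most_common(1)[0][0]

# A user whose contacts mostly tag themselves in London:
print(infer_location(["London", None, "London", "Paris", "London"]))
# prints "London"
```

A majority vote like this is crude, but it shows how the 2 percent of users with geotagging enabled can anchor location estimates for a much larger share of the network.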
When analyzing tweets, messages from users such as government officials and journalists usually require less verification than those from “amateur” users. But a good deal of sifting still happens even with those sources, whose tweets aren’t always relevant to Dataminr’s clients. “It’s not guaranteed that a tweet from a reliable source will make it through the algorithm as an important event,” says Patel. “Is the President tweeting about Thanksgiving turkey ceremonies important to a global macro hedge fund?”
Companies like Dataminr also have to contend with cases of reliable sources being hacked. Perhaps the most infamous example is the infiltration of an Associated Press account in April 2013, when hackers sent a bogus tweet reporting that explosions at the White House had injured President Obama. The “news” created a short-lived frenzy in the markets, with the Dow Jones Industrial Average plummeting more than 140 points before recovering a few minutes later. Dataminr sent word to clients about the initial tweet but, shortly thereafter, notified them that the alert was probably fake—a conclusion the firm reached after finding a tweet from a credible White House source dismissing the news. At that point, the AP had yet to announce the hack. Dataminr’s clients, says Patel, “were able to take part in upside while people realized what was going on.”
Specialized crowd-sourcing platforms can also benefit from rigorous intelligence vetting. At Estimize, an online platform for gathering earnings and revenue predictions, those contributing forecasts include buy-side and sell-side analysts, other financial professionals, industry experts and academics. The firm’s CEO, Leigh Drogen, says that though Estimize contributors are a self-selecting, well-informed bunch, they, too, can occasionally provide unreliable information, for reasons as mundane as a misplaced decimal point. Such mistakes are meant to be caught by a so-called reliability algorithm and then manually reviewed by Estimize staff. About 2 percent of contributed estimates, Drogen says, are flagged for review, and a quarter of those are ultimately excluded from Estimize’s consensus forecasts.
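Estimize's reliability algorithm is not public, but a misplaced decimal point is exactly the kind of error a simple outlier check can surface: the errant estimate sits roughly an order of magnitude away from its peers. A hedged sketch, with a purely illustrative threshold:

```python
# Illustrative sketch of flagging implausible crowd estimates, e.g. a
# misplaced decimal turning 1.25 into 12.5. The tolerance value is an
# assumption; Estimize's actual reliability algorithm is proprietary.

def flag_outliers(estimates, tolerance=3.0):
    """Return estimates more than `tolerance`x away from the median."""
    ordered = sorted(estimates)
    median = ordered[len(ordered) // 2]
    flagged = []
    for e in estimates:
        ratio = e / median
        if ratio > tolerance or ratio < 1 / tolerance:
            flagged.append(e)
    return flagged

# Four analysts cluster around $1.25 EPS; one entry looks like 1.25 x 10.
eps = [1.20, 1.25, 1.30, 12.5, 1.22]
print(flag_outliers(eps))  # prints [12.5]
```

As the article notes, flagged estimates would then go to a human reviewer rather than being discarded automatically.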
Once errant information is stripped out, there’s still more sorting to be done: not all Estimize users’ contributions are given the same weight when the firm compiles consensus forecasts. Instead, each user’s contribution is weighted according to a confidence score, which is determined by a multivariate regression model based on 60 different factors, including how old the user’s latest estimate is and the accuracy of the user’s prior forecasts.
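Estimize's full model weighs roughly 60 factors; a stripped-down sketch using just the two factors the article names, historical accuracy and estimate age, conveys the mechanics. The exponential decay and its 30-day half-life are assumptions for illustration:

```python
import math

# Hedged sketch of a confidence-weighted consensus forecast. Estimize's
# real model uses ~60 factors in a multivariate regression; here we use
# only two illustrative ones: the contributor's historical accuracy and
# how stale their estimate is.

def consensus(estimates):
    """estimates: list of (value, accuracy in [0, 1], age_in_days)."""
    weighted_sum = total_weight = 0.0
    for value, accuracy, age_days in estimates:
        # Older estimates decay exponentially; 30-day half-life assumed.
        recency = math.exp(-age_days * math.log(2) / 30)
        weight = accuracy * recency
        weighted_sum += value * weight
        total_weight += weight
    return weighted_sum / total_weight

# Three hypothetical contributors: a fresh estimate from an accurate
# forecaster dominates a stale one from a weaker forecaster.
data = [(1.25, 0.9, 0), (1.10, 0.5, 60), (1.30, 0.8, 15)]
print(round(consensus(data), 3))
```

The result lands much closer to the fresh, high-accuracy estimates than a plain average would, which is the point of the weighting.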
The resulting consensus forecasts, says Drogen, are more accurate than Wall Street’s consensus estimates about 70 percent of the time. “The basic premise is that the wider distribution you have, the better data set you’ll end up with,” says Drogen, “but you still have to make sure that data set is clean.”