The reality is that this is far more complex than some make it out to be when "dark social" gets dropped left and right. Pinpointing things actually requires a few different reports; just one view or report of the data simply does not cut it.
For example, the previous article on this site registered zero engagements on its first day of publication.
True? From the Twitter viewpoint, yes (according to their measurements). In reality? A check is required to verify outliers (i.e. anomaly detection).
An initial check makes it evident that there are no referrals from the Twitter domains; make sure to remember to check for t.co, which Twitter uses for URL shortening.
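The check above can be sketched in a few lines of Python. This is a minimal illustration, not the tooling used for the article: the referrer strings and the `is_twitter_referral` helper are hypothetical, but the point stands that Twitter traffic can arrive under several hostnames, so the t.co shortener domain must be checked alongside twitter.com.

```python
from urllib.parse import urlparse

# Hypothetical referrer strings, as a web analytics tool might export them.
referrers = [
    "https://t.co/AbC123",       # Twitter's link shortener
    "https://www.google.com/",   # ordinary search referral
    "",                          # no referrer at all: lands in the "unknown"/direct bin
]

# Twitter traffic can show up under several hostnames, including t.co.
TWITTER_HOSTS = {"twitter.com", "www.twitter.com", "t.co"}

def is_twitter_referral(referrer: str) -> bool:
    """True if the referrer's hostname belongs to Twitter (incl. t.co)."""
    host = urlparse(referrer).netloc.lower()
    return host in TWITTER_HOSTS

for r in referrers:
    print(repr(r), "->", is_twitter_referral(r))
```

An empty referrer returns False here, which is exactly the "referral blindness" case discussed next.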
The bundle of "unknown" referrals has too large a browser count. Given that this site gets very few random stray direct visitors every week, a share of 66,67% is simply too much based on a firm grasp of visitor volumes. The reason is "referral blindness".
If there are no referral details, then one needs to look elsewhere to get any clarity. The two best places to search for data are 1) the page URL (i.e. where the measurement came from) and 2) the user agent.
The user agent is less reliable than the page URL. The reason is that if the code executes properly and the data isn't manipulated, the page URL is present, whereas the user agent sometimes suffers from some rather serious monkey-business manipulation. Using the page URL requires that a dose of planning has been applied, namely that parameters are present in the page URL to shed some light.
Since the previous article (about binge viewing and reading) was planned as a vehicle for data collection for this article, it is not very surprising that the simplest form of campaign tracking enabler was placed in the page URLs of the distributed links, to make sure a bit more visibility would be available.
The "T" after the question mark indicates that the link was originally placed on Twitter (E as in email, FB = Facebook, and LI = LinkedIn). This proves that there were indeed visitors generated through the use of the Twitter link.
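Decoding these single-letter codes from a page URL is straightforward. The sketch below is an assumption about how such a lookup could work (the `CHANNELS` mapping and example URLs are hypothetical); it only relies on the scheme described above, where the code sits alone after the question mark.

```python
from urllib.parse import urlparse

# Hypothetical mapping for the single-letter campaign codes described above.
CHANNELS = {"T": "Twitter", "E": "Email", "FB": "Facebook", "LI": "LinkedIn"}

def channel_from_url(page_url: str) -> str:
    """Map the text after the '?' to a distribution channel."""
    query = urlparse(page_url).query       # everything after the "?"
    return CHANNELS.get(query, "unknown")  # absent or stripped -> "unknown"

print(channel_from_url("https://example.com/article?T"))   # a Twitter-placed link
print(channel_from_url("https://example.com/article"))     # no parameter: unknown
```

A URL with no query string falls straight into the "unknown" bucket, which is where the stripper cases below come from.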
While most variants of the page URL make sense, a few don't. The page URL without a ? points to 2 infrequent visitors who came to the website and loaded the page.
Then there is the case of the strippers, where the campaign parameter isn't present due to a bit of cut'n'paste or page URL manipulation.
As for the missing visitors from Twitter, it seems to be a case of the classic "right click and open in a new window", which causes the referrer info to not be present.
To wrap things up, social tracking isn't all that dark once some light is applied to it. With very little effort, breaking out another report item to see where the strippers crawl out from puts the spotlight on the browsers causing it.
By simply filtering the various link variants per browser and adding a bit of slice'n'dice, the stripper culprits (red box) turn out to be Chrome users.
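That slice'n'dice amounts to a group-and-count. A minimal sketch, with invented `(page_url, browser)` rows standing in for the real analytics export: tally only the hits whose campaign parameter is missing, per browser, and the stripping culprit floats to the top.

```python
from collections import Counter

# Hypothetical (page_url, browser) rows, as exported from an analytics tool.
hits = [
    ("https://example.com/article?T",  "Firefox"),
    ("https://example.com/article",    "Chrome"),   # parameter stripped
    ("https://example.com/article",    "Chrome"),   # parameter stripped
    ("https://example.com/article?FB", "Safari"),
]

# Count only the hits where the campaign parameter is missing, per browser.
strippers = Counter(browser for url, browser in hits if "?" not in url)
print(strippers.most_common())
```

In this toy data the count singles out Chrome, mirroring the red-box finding in the report.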
One interesting find is where the Twitter link is used by Facebook app users; this is a clear indication of visitors sharing the Twitter link on Facebook (green box).
It is worth noting that any user agent with either Facebook or Twitter strings embedded means one less browser counted in the direct entry bin.
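Spotting those embedded strings is a simple substring check. The user agent strings below are hypothetical (real in-app browsers do embed tokens such as "FBAN"/"FBAV" for the Facebook app and "Twitter" for the Twitter app), and the `in_app_source` helper is an illustrative assumption, not a production-grade UA parser.

```python
# Hypothetical user agent strings; real in-app browsers embed tokens like
# "FBAN"/"FBAV" (Facebook app) or "Twitter" (Twitter app).
user_agents = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 7_0 like Mac OS X) [FBAN/FBIOS;FBAV/6.8]",
    "Mozilla/5.0 (Linux; Android 4.4) TwitterAndroid/5.0",
    "Mozilla/5.0 (Windows NT 10.0) Chrome/38.0",
]

def in_app_source(ua):
    """Reassign in-app browser hits away from the direct-entry bin."""
    if "FBAN" in ua or "FBAV" in ua or "Facebook" in ua:
        return "Facebook app"
    if "Twitter" in ua:
        return "Twitter app"
    return None  # no social app token: stays in the direct-entry bin

for ua in user_agents:
    print(in_app_source(ua))
```

Every non-None result here is one browser moved out of the direct-entry bin and attributed to a social source.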
Using a few basic web analytics functions, it is possible to reduce the "dark social" volume.
In reality, Twitter sent 4 browsers on the day the article was published (green & pink boxes), not zero. Result: 2 of 39 browsers (~5,13%) had an unknown referral, a.k.a. a true direct entry. A quick investigation = a huge accuracy improvement, if your web analytics tool & data allow it.
Always validate what the other tool (in this case Twitter analytics) is reporting against your own data. It might be quite inaccurate or biased because it doesn't have the full picture. Without proper validation, the number can very well be 42.