In the previous post (Weekly unique browsers) it became evident that data quality improved notably when using a 1st party cookie to track unique browsers.
Naturally, the quality and retention rate of the weekly new cookies become interesting to analyze.
Looking at the previously used example site, with over 500k unique browsers per week, a large share of those browsers are new due to the creation of new cookies.
Studying the tracking data by cookie creation week makes it quite clear that, typically, roughly 10% of the cookies will show up in the data during the week after their creation.
In the weeks that follow, the presence of these new cookies drops to a week-on-week return ratio of 7% down to 4%.
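The cohort calculation behind these percentages can be sketched roughly as follows. The data structures (weekly sets of cookie IDs) and names are illustrative, not taken from any particular analytics backend:

```python
# Minimal sketch of the cohort analysis described above: take the cookies
# created in one week and measure what share of them reappears in each
# following week. All data here is hypothetical toy data.

def return_ratios(new_cookies, later_weeks):
    """For a cohort of cookies first seen in one week, compute the share
    of that cohort seen again in each of the following weeks."""
    cohort = set(new_cookies)
    return [len(cohort & set(week)) / len(cohort) for week in later_weeks]

# Toy example: 10 new cookies; 1 returns in week +1 (10%), 2 in week +2 (20%).
cohort = [f"c{i}" for i in range(10)]
weeks = [
    ["c0", "x1", "x2"],   # week +1: one cohort cookie returns
    ["c0", "c3", "x9"],   # week +2: two cohort cookies return
]
print(return_ratios(cohort, weeks))  # [0.1, 0.2]
```

On real data the weekly sets would of course be far larger, but the same intersection-over-cohort-size ratio gives the week-on-week return curve discussed above.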
The point is that the 1st party cookie will in no way secure a fully accurate unique identifier, so understanding the cookie churn rate is crucial for any presentation of "unique" browser numbers. The gap between the total unique browsers and the returning ones (i.e. browsers = cookies) is essential in any numerical presentation: the bigger the gap, the larger the uncertainty in the unique browser numbers.
Within 3 weeks of a cookie's creation, the return ratio of the new cookies clearly levels out.
If using a calculated waterline to get a more reliable unique browser count doesn't appeal to you, the sliding week method may be an alternative route to accuracy.
This calculation method rests on the assumption that by adding the returning unique browsers of week X to the new unique browsers of that week that are also found in the week after, the inaccuracy caused by cookie deletion can be substantially reduced.
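The sliding week method described above can be sketched as a simple set calculation. The sets and the use of the previous week as a proxy for "seen before" are assumptions for illustration:

```python
# Hedged sketch of the sliding-week estimate: unique browsers for week X
# are counted as the returning browsers (also seen the week before) plus
# the new cookies of week X that reappear in the week after, which filters
# out cookies deleted immediately after creation. Toy data only.

def sliding_week_uniques(prev_week, week_x, next_week):
    """Estimate unique browsers for week X using the sliding week method."""
    returning = week_x & prev_week        # cookies already seen before week X
    new = week_x - prev_week              # cookies first seen in week X
    confirmed_new = new & next_week       # new cookies that survive into X+1
    return len(returning | confirmed_new)

prev_week = {"a", "b", "c"}
week_x    = {"b", "c", "d", "e"}   # b, c return; d, e are new
next_week = {"c", "d"}             # only d is confirmed the week after
print(sliding_week_uniques(prev_week, week_x, next_week))  # 3 (b, c, d)
```

The raw unique count for week X here would be 4, while the sliding week estimate is 3: the new cookie "e" never reappears and is treated as likely churn rather than a distinct browser.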
No matter how you spin it, presenting a unique browser count without showing the returning browser ratio amounts to presenting fuzzy, lukewarm numbers.
The question that begs to be asked is why this seems to be the norm. The answer is quite evident: bigger is better.