Another interesting topic, and one that sparks plenty of debate.
Every time I bring this up, hands go up at my customers asking why they shouldn't be allowed to run the analysis against their entire dataset. I always use the example of finding the needle in the proverbial haystack: you will always need a strategy for taking a chunk of data at a time. You will always need to form a hypothesis and try to disprove it by running the same analysis against chunk after chunk.
Now, you can use automated methods of analysis, including some data science techniques, but at the end of it you still need to understand what a good chunk of data to slice at a time looks like.
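The chunk-and-test loop described above can be sketched in plain Python. The dataset and the hypothesis here are hypothetical, purely for illustration: we test the claim "no order exceeds 500" against one fixed-size chunk at a time rather than the whole set at once.

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive fixed-size chunks from any iterable."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# Hypothetical dataset: order amounts.
orders = [120, 480, 90, 510, 300, 45, 499, 700]

# Hypothesis: no order exceeds 500. Run the SAME check against each
# chunk, instead of scanning the entire dataset in one pass.
violations = []
for chunk in chunked(orders, 3):
    violations.extend(x for x in chunk if x > 500)

print(violations)  # the orders that disprove the hypothesis
```

The point is not the toy check itself but the shape of the loop: the analysis stays constant while the data moves through it one manageable slice at a time.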
Use the same logic to decide what's worth analyzing for a particular dashboard. Every performance strategy of the last two decades has been a variation on the same theme:
Pre-aggregates, cubes, caches, reflections, lenses, etc.
At the end of the day, you are doing something to shrink the data down to a "useful set of data", and I ask you to use your common sense here:
- At times, this means creating separate extracts from the same data source for different purposes
- At times, this means using smart filters
- At times, this means limiting the rows of data in your extract
- At times, this means using Prep to create an extract with pre-aggregated fields inside
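The last bullet, pre-aggregating before you build the extract, is easy to see in miniature. This is a stdlib-only sketch with made-up rows; the column names and figures are illustrative, not from any real source. Six transaction-level rows collapse to four (region, product) rows, which is exactly the kind of shrinking a pre-aggregated extract does at scale.

```python
from collections import defaultdict

# Hypothetical row-level data: (region, product, sales), one row per transaction.
rows = [
    ("East", "Widget", 100.0),
    ("East", "Widget", 250.0),
    ("East", "Gadget", 75.0),
    ("West", "Widget", 300.0),
    ("West", "Gadget", 50.0),
    ("West", "Gadget", 125.0),
]

# Pre-aggregate to one row per (region, product) before building the
# extract, so the dashboard only ever queries the smaller "useful set".
totals = defaultdict(float)
for region, product, sales in rows:
    totals[(region, product)] += sales

extract = [(r, p, amt) for (r, p), amt in sorted(totals.items())]
print(extract)
```

In a real workflow the `groupby`-style rollup happens in Prep (or in SQL before the extract), but the trade-off is the same: you give up row-level detail you were never going to chart in exchange for a dashboard that reads far fewer rows.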
Hope you get the idea; if not, shoot me a question.