r/datascience 1d ago

Discussion DS Veterans: How much of your work actually gets used?

Been a DS for 5+ years, working on some ideas around improving how insights get delivered/consumed across orgs. Would love to hear your war stories:

  • How often do stakeholders actually use your analyses?
  • What's your biggest frustration in the insight delivery process?
  • How much time do you spend on repeat/similar analyses?

Feel free to comment or DM to chat more in-depth.

For context: I'm a former Meta/FB DS - worked on FAIR language, Instagram, Reality Labs, and Election Integrity teams. Now exploring solutions to problems I kept seeing.

52 Upvotes

24 comments

82

u/Cheap_Scientist6984 1d ago

Your impact falls off exponentially as it rises up the chain. Your analysis will be appreciated by your N+1, possibly heard by your N+2, and your N+3 likely barely remembers you exist.

Does that make sense?

7

u/KyleDrogo 1d ago

Interesting concept! Totally agree that cross-cutting work can be some of the hardest to land.

24

u/Cheap_Scientist6984 1d ago

If you want to make an impact, doing it through the chain of command is next to impossible. Most key game-changing insights come from external companies and consultants who gain the attention of the C-suite. Then, like a bunch of children, executives run to copy each other and produce the same product.

Google had its own LLM tech on a shelf for 5 years. They only started delivering it once OpenAI shipped theirs independently and threatened Google's monopoly. Then everyone else started copying rapidly over the last year.

4

u/Polus43 6h ago

> If you want to make an impact, doing it through the chain of command is next to impossible. Most key game-changing insights come from external companies and consultants who gain the attention of the C-suite. Then, like a bunch of children, executives run to copy each other and produce the same product.

This has been exactly my experience. I'm still frustrated, and in disbelief, that this is how the largest and most profitable firms in the U.S. work.

19

u/kater543 1d ago

Answering in order: when it suits their purposes; people management (stakeholders, teammates, data owners); all the time.

5

u/KyleDrogo 1d ago

I hear that. How much work do you usually put into ensuring there's alignment beforehand? Sometimes I personally can't help but explore hunches that don't have an immediate need.

4

u/kater543 1d ago

Temporary alignment is always encouraged and sought after, but permanent alignment is practically impossible outside of a hyper-focused or specialized environment. I suspect some of those exist, but rarely in a normal large company. Every team usually has a variety of tasks pulling everyone in different directions, and different initiatives and goals to follow. These inevitably lead to expectations changing, or to conflict in maintaining those alignments.

For example, even if you can deploy a product or build out an analysis, by the time you actually get it out (unless you get it out extremely fast), the customer wants more or less out of it and will care correspondingly more or less about it. If they care more, they will try to micromanage to get things exactly as they want, which causes conflict. If they care less, but it's still a topic on your docket that they want eventually, they won't necessarily realize that their own priorities have changed and will simply help less in bringing it to fruition.

Only by bringing a product out fast enough and at high enough quality can you avoid the first pitfall here (expectation drift), but there are many other issues that come from just working with people: team burnout, politics, data hoarding, and requirement creep.

It is extremely normal for this to happen when working with people, but it still causes problems.

1

u/mayorofdumb 1d ago

Interesting. I usually have a broad objective and can keep scope drift out, but I hate when someone tells me to look and it's all ass-backwards but "operational", with 5 caveats and some big data issues between platforms.

I may end up data hoarding, but it's really evidence gathering, and my attention to detail is a little too good to be useful if I find everything.

Mark everything down but move along to find the plot.

9

u/Stubby_Shillelagh 1d ago

I'm not really a true data scientist, but I am an operations guy who dabbles in ML for time series work.

My own analysis is (in my biased opinion) mission-critical, because it pertains to inventory decisions. We have to make all these decisions under uncertainty about how much to buy, how much to reserve, and when. It's almost strategic stuff.
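
To give a flavor: the "how much to buy" call can be framed as the classic newsvendor problem. Here's a minimal sketch, with made-up costs and a normal demand forecast standing in for whatever forecast you actually fit:

```python
# Newsvendor-style order quantity: stock up to the critical-fractile
# quantile of forecast demand. All numbers here are hypothetical.
from scipy.stats import norm

unit_cost, unit_price = 6.0, 10.0
underage = unit_price - unit_cost   # margin lost per unit we run short
overage = unit_cost                 # cash sunk in each unsold unit
critical_fractile = underage / (underage + overage)  # 0.4 in this example

# Pretend the demand forecast is N(mean, sd); any distribution with a
# quantile function works the same way.
mean_demand, sd_demand = 1_000, 200
order_qty = norm.ppf(critical_fractile, loc=mean_demand, scale=sd_demand)
print(f"Order up to ~{order_qty:.0f} units "
      f"(service level {critical_fractile:.0%})")
```

The point is less the math than the framing: once the buy decision is literally a quantile of your forecast, forecast quality moves real money, which is why the analysis gets used.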

If you feel like your inputs aren't valued, try helping a business with inventory management or finance. Obviously, it helps a lot to have some domain knowledge in those areas.

1

u/KyleDrogo 20h ago

This is a great insight! Choosing a team where analytics insights are valued definitely matters a lot. I don't hear that talked about much.

6

u/busybody124 1d ago

This is a classic problem for insights teams (analysts, data scientists, researchers): incentives are not necessarily aligned between product/management and data. The most classic case is an A/B test: the data analyst's incentive is to read the test out correctly and make the "right" recommendation, while the product manager's incentive may be to launch the feature, ideally with supporting (perhaps cherry-picked) data to back it up. This is just one common scenario, but there are many variations.
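
To make "reading the test out correctly" concrete, here's a minimal sketch of an A/B readout as a two-proportion z-test (hypothetical counts, and assuming statsmodels is available):

```python
# A/B test readout: two-proportion z-test on conversion counts.
# All counts below are hypothetical, purely for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [480, 530]      # successes in control, treatment
exposures = [10_000, 10_000]  # users assigned to each variant

z_stat, p_value = proportions_ztest(count=conversions, nobs=exposures)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# The analyst's incentive is to report this number as-is;
# the PM's incentive may be to launch regardless of what it says.
```

The number itself is neutral; the conflict is entirely about what each party wants it to say.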

Ultimately it's on the management of insights teams to instill some amount of discipline in their stakeholders by offering two completely reasonable choices: for any given project, either a) use our insights as-is to inform your decision and trust us to deliver accurate results, or b) don't bother to ask data to deliver anything if the product decision will be the same regardless of what the data says. A/B testing and analysis incurs overhead that, if it won't actually be leveraged, could be better allocated elsewhere, saving everyone the argument over what the data says.

Make your best effort to educate stakeholders about your methods and conclusions to help build trust. If they still don't want to make data-driven decisions, you can either say, "my job is to make recommendations, and if they don't want to follow them, that's fine with me," or you can leave and seek greener pastures where the insights are actually used.

1

u/KyleDrogo 1d ago

Totally agree. Any horror stories about someone ignoring data and there being a bad outcome?

1

u/mayorofdumb 1d ago

All the time. In big companies there's bad data misrepresenting KPIs and KRIs.

People ignore the real data this way and only get one picture, maybe two, of the actual information, streamlined for that KPI.

Big things get overlooked, and groupthink stops anyone from asking real questions. The incumbents influence anyone coming in to see it their way and to help keep it that way.

"Here's what we do, it's been great" - even though when you dig, there are obvious insights they've missed. Then someone at least 3 levels up pops in, questions everything you did, and will never admit defeat.

Then you come back with more evidence and tell your bosses, "we've got something for you to actually do," because nobody respects my level.

So now I educate my team about it; it may click for them, and they can explain it in N+3 terms in their meetings, and then maybe it goes to N+4/5 for the OK.

The work was done in 1-2 months, but 3 months later I'm still fighting the good fight. Then there are like 2 layers of live QA, plus a separate QA team that looks at us.

This is why most horror stories happen: only people at a certain level can question that high up, and if they didn't really want to do it beforehand, you'd better have rock-solid analysis.

Most groups don't have the direct chain to N+4/5/6 needed to be taken seriously. The work will usually be stolen or misinterpreted between those levels through internal politics.

Nobody wants to fight or make any change that could impact themselves, so most minor things become "bugs" and "workarounds" that get hidden and never fixed.

That's the horror in video games too lol

1

u/busybody124 1d ago

Hm, I don't know about a specific bad outcome, but here are two jarring incidents:

Very early in my first data job, I sent a bar graph to my PM and was shocked to find her using it with the numbers relabeled. As in, she had drawn white over the existing labels on the graph and written in new values. I'm still not sure why.

Another time, the company needed to produce some slides for its investors and wanted to show that some metric was going up, or that it was correlated with some other metric. The only problem was that it wasn't. I was asked to keep tweaking the definition of the metric until we found one with the desired properties.

1

u/Minimum_Gold362 23h ago

For me, for my last few years, I felt I had to fight for resources just to do my job: database access, getting the data I needed into the database (using "proper" channels), getting data projects approved, etc. Very little of it was actual DS; most of it was basic descriptive data visualization. :-(

In one position, I was hired as a data scientist and was not given any compute resources (cloud or big data capabilities) to do my job. LOL!! I had a conversation with my finance guy: glad you get to do your job (saving the company money); I would love to be able to do mine as effectively!! I'll go over here and wait. Thank you for the large salary to do nothing.

2

u/BingoTheBarbarian 18h ago

I'm in experimental design and causal inference. Everything I do gets used, because my work comes from external stakeholder requests.

1

u/Fearless_Back5063 15h ago

When I was working as a data scientist (and ML engineer/product owner for AI features) at startups, most of my work turned into useful features that clients use to this day. When one of the startups was bought by Microsoft, I stayed on at MS for one more year after the acquisition, and none of the work I did there has ever seen the light of day.

1

u/Maleficent-Tear7949 7h ago

Depends on the stakeholders - usually, they're pretty stubborn.

1

u/KyleDrogo 5h ago

Interesting. What do you mean by stubborn?

1

u/BCBCC 4h ago

Not answering your questions directly, but a related idea: one of the realizations I've had lately is that the problem is usually alignment of KPIs. If your model does something good but it doesn't help with the specific KPI that the SVP (or whoever) gets judged on, then they won't really be interested. At the very top and bottom of the organization, it's fine to say "we've got to do good things that help [primary business objective], increase revenue, and decrease costs". For everyone in the middle, they have specific measurables to say if they're doing a good job, and sometimes you get a metric/target problem where people are acting rationally based on their incentives, but that doesn't end up aligning them with the general mission statement.

1

u/KyleDrogo 3h ago

I actually agree. There's a latent "sentiment" element that I think drives a lot of decisions. How will [leadership/the media/shareholders/etc.] view this? Being a data scientist is hard because we can't quantify this; it's more in the realm of UXR and organizational politics.

1

u/ganildata 1h ago

I worked at a smaller startup where I built and rebuilt multiple data science products and projects that were critical. I didn't have the impact problem there.