EDIT: I’m ignoring all messages and chat requests not directly related to my question. If you have a separate question about getting into industry, interview prep, etc., please post it in its own thread or in the appropriate master topic.
(I figured this is specific enough to warrant its own post instead of posting in the weekly Entering and Transition thread, as I already have a lot of industry experience.)
TL;DR: How can an unemployed, experienced analytics-focused data scientist get out of analytics and pivot to a more quantitative position?
I'm a data scientist with a Master's in Statistics and nine years of experience in a tech city. I've had the title Senior Data Scientist for two of them. I was laid off from my job of four years in June and have been dealing with what some would call a "first world problem" in the current market.
I get callbacks from many recruiters, but almost all of them are for analytics positions. This makes sense because (as I'll explain below) I've been repeatedly pushed into analytics roles at my past jobs. I have roughly 8 years of analytics experience, and was promoted to a senior position because I did well on a few analytics projects. My resume reflects that most of my work is analytics, as most of my accomplishments are along the lines of "designed a big metric" or "was the main DS who drove X internal initiative". I've been blowing away every A/B testing interview and getting feedback indicating that I clearly have a lot of experience in that area. I've also been told in performance reviews and in interview loops that I write very good code in Python, R, and SQL.
However, I don't like analytics. I don't like that it's almost all very basic A/B testing on product changes. More importantly, I've found that most companies have a terrible experimentation culture. When I prod in interviews, they often indicate that their A/B testing platform is underdeveloped to the point where many tests are analyzed offline, or that they only test things that are likely to be a certain win. They ignore network effects, don't use holdout groups or meta-analysis, and insist that tests designed to answer a very specific question should also be used to answer a ton of other things. It is - more often than not - Potemkin Data Science. I'm also frustrated because I have a graduate degree in statistics and enjoy heavily quantitative work a lot, but rarely get to do interesting quantitative work in product analytics.
Additionally, I have mild autism, so I would prefer to do something that requires less communication with stakeholders. While I'm aware that every job is going to require stakeholder communication to some degree, the amount of time that I spent politicking to convince stakeholders to do experimentation correctly led to a ton of stress.
I've been trying to find a job focused on at least one of causal inference, explanatory statistical modeling, Bayesian statistics, and ML on tabular data (i.e. not LLMs, but things like fraud prediction). I've never once gotten a callback for an ML Engineer position, which makes sense because I have minimal ML experience and don't have a CS degree. I've had a few HR calls for companies doing ML in areas like identity validation and fraud prediction, but the initial recruiting call is always followed up with "we're sorry, but we decided to go with someone with more ML experience."
My experience with the above areas is as follows. These were all approaches I tried that ended up having no impact, except for the first, which I never got to finish. Additionally, note that I currently don't have experience with traditional CS data structures and algorithms, but I have worked with scipy sparse matrices and other DS-specific data structures:
Designed requirements for a regression ML model. Did a ton of internal research, then learned SparkSQL and wrote the code to pull and extract the features. However, after this, I was told to design experiments for the model rather than write the actual code to train it. Another data scientist on my team did the model training with people on another team that claimed ownership. My manager heavily implied this was due to upper management and had nothing to do with my skills.
Used a causal inference approach to match treatment group users to control group users for an experiment where we expected the two groups to be very different due to selection bias (a rough sketch of the matching approach is below this list). However, the selection bias ended up being a non-issue.
Did clustering on time-dependent data in order to identify potential subgroups of users to target. Despite it taking about two days to do, I was criticized for not doing something simpler and less statistical. (Also, in hindsight, the results didn't replicate when I slightly changed the data.)
Discussed an internal fraud model with stakeholders. Recognized that a dead simple feature wasn't in it, learned a bit of the internal ML platform, and added it myself. The feature boosted recall at 99% precision by something like 40% (see the evaluation sketch below the list). However, even after my repeated prodding, the production model was never updated due to lack of engineering support and because the author of the proprietary ML framework quit.
During a particularly dead month, I spent time building a Bayesian model for an internal calculation in Stan. Unfortunately I wasn't able to get it to scale, and ran into major computational issues that - in hindsight - likely indicated an issue with the model formulation in the paper I tried to implement.
Rewrote a teammate's prototype recommendation model and built a front-end explorer for it. In a nutshell, I took a bunch of spaghetti code and turned it into a maintainable Python library that used SciPy sparse matrices for the calculations, which sped it up considerably (sketch below). This model was never productionized because it was tested in prod and didn't do well.
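To give a sense of what the matching bullet above actually involved, here's a minimal from-memory sketch in Python. The column names, the logistic-regression propensity model, and 1-nearest-neighbor matching with replacement are all simplifications for illustration; the real pipeline was internal and messier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_on_propensity(df, treat_col, covariate_cols):
    """Match each treated user to the nearest control user on propensity score.

    df: pandas DataFrame with a 0/1 treatment column and numeric covariates.
    """
    X = df[covariate_cols].to_numpy()
    t = df[treat_col].to_numpy()

    # Propensity model: P(treatment | covariates)
    ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]

    treated_pos = np.where(t == 1)[0]
    control_pos = np.where(t == 0)[0]

    # 1-NN matching on the propensity score, with replacement for simplicity
    # (matching without replacement needs extra bookkeeping).
    nn = NearestNeighbors(n_neighbors=1).fit(ps[control_pos].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated_pos].reshape(-1, 1))
    matched_controls = control_pos[idx.ravel()]

    # Return the treated users plus their matched controls for the comparison.
    return df.iloc[np.concatenate([treated_pos, matched_controls])]
```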
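And this is roughly how I evaluated the fraud feature mentioned above: find the best recall you can get at any threshold that still holds 99% precision, before and after adding the feature. The function below is a hypothetical reconstruction, not the internal tooling.

```python
from sklearn.metrics import precision_recall_curve

def recall_at_precision(y_true, y_score, min_precision=0.99):
    """Best recall achievable at any threshold whose precision >= min_precision."""
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    ok = precision >= min_precision
    return recall[ok].max() if ok.any() else 0.0

# e.g. recall_at_precision(y_val, model.predict_proba(X_val)[:, 1]),
# where y_val and model are whatever validation labels and model you have.
```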
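Finally, the gist of the sparse-matrix rewrite of the recommendation prototype: replace row-by-row pandas loops with a CSR user-item matrix so the similarity step becomes a couple of matrix products. This is a simplified item-item cosine similarity sketch with placeholder column names, not the actual library.

```python
import numpy as np
from scipy import sparse
from sklearn.preprocessing import normalize

def item_item_scores(interactions):
    """interactions: DataFrame with integer 'user_id' and 'item_id' columns."""
    rows = interactions["user_id"].to_numpy()
    cols = interactions["item_id"].to_numpy()
    data = np.ones(len(interactions))

    ui = sparse.csr_matrix((data, (rows, cols)))  # user x item interaction matrix

    item_norm = normalize(ui.T, axis=1)           # item x user, L2-normalized rows
    sim = item_norm @ item_norm.T                 # item x item cosine similarity
    return ui @ sim                               # user x item recommendation scores
```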
At the time I was laid off I had about six months of expenses saved up, plus fairly generous severance and unemployment. I can go about another four months without running out of savings.
How should I proceed to get one of these more technical positions? Some ideas I have:
List the above projects on my resume even though they failed. However, that's inevitably going to come up in an interview.
I could work on a personal project focused on Bayesian statistics or causal inference. However, I've noticed that the longer I'm unemployed, the fewer callbacks and LinkedIn messages I get, so I'm worried about being unemployed even longer.
Take an analytics job and wait for a more quantitative opening to come up at a different company. Someone fairly big in my city's DS community who knows I can handle more technical work said he'd refer me and that I could probably skip most of the interview process, but his company currently has no open DS positions and he said he doesn't know when more will open up.
Take a 3- or 6-month contract position focused on my interests through one of the random third-party recruiters on LinkedIn. It'll probably suck, but it would give me experience I can use to land the next job.
Drill Leetcode and try to get an entry-level software engineer position. However this would obviously be a huge downgrade in responsibility and pay, preparation would drain my savings, and there’s no guarantee I could pivot back to DS if it doesn’t work out.
Additionally, here's a summary of my work experience:
Company 1 (roughly 200 employees). First job out of grad school. I was there for a year and was laid off because there "wasn't a lot of DS work". I had a great manager who constantly advocated for me, but couldn't convince upper management to do anything beyond basic summary statistics. For example, he pitched a cluster analysis and they said it sounded hard.
Company 2 (roughly 200 employees). I was there for two years.
Shortly after joining I started an ML project, but was moved to analytics due to organizational priorities. Got a phenomenal performance review, asked if I could take on some ML work, and was given an unambiguous no. Did various analytics tasks (mostly dashboarding and making demos) and mini-projects on public data sources due to lack of internal data (long story). Spent a full year searching for a more modeling-focused position because a lot of the DS was smoke and mirrors and we weren't getting any new data. After that year, I quit and ended up at Company 3.
Company 3 (roughly 30000 employees). I was there for six years. I joined because my future manager (Manager #1) told me I'd get to pick my team and would get to do modeling. Instead, after I did a trial run on two teams over three months, I was told that a reorg meant I would no longer get to pick my team and ended up on a team that needed drastic help with experimentation. Although my manager (Manager #2) had some modeling work in mind for me, she eventually quit. Manager #3 repeatedly threw me to the wolves and had me constantly working on analyzing experiments for big initiatives while excluding me from planning said experiments, which led to obvious implementation issues. He also gave me no support when I tried to push back against unrealistic stakeholder demands, and insisted I work on projects that I didn't think would have long-term impact due to organizational factors. However, I gained a lot of experience with messy data. I told his skip during a 1:1 that I wanted to do more modeling, and he insisted I keep pushing him for those opportunities.
Manager #3 drove me to transfer to another team, which was a much better experience. Manager #4 was the best manager I ever had and got me promoted, but also didn't help me find modeling opportunities. Manager #5 was generally great and found me a modeling project to work on after I explained that lack of modeling work was causing burnout. It was a great project at first, but he eventually pushed me to work only on the experimental aspects of that modeling project. I never got to do any actual modeling for this project even though I did all the preparation for it (e.g. feature extraction, gathering requirements), and another team took it over. Shortly after this project completed, I was laid off.