Finding "Truth" in Data


In Part 3 of our webinar series "Top 5 Pitfalls to Avoid When Building a Data Science Team," we interviewed Randi Ludwig, Data Scientist at Dell EMC, and Amarita (Amar) Natt, Managing Director at Econ One Research, about how to avoid bias and inadequate model validation. What followed was a very insightful discussion, with a diversity of advice and perspectives from these two experienced panelists. It's well worth watching the video recap to hear everything these #womenintech had to say on the subject.


Not a lot of time and just want the highlights? Below is a summary of the questions discussed and the panelist answers.

How do you approach the process from the conception of a project all the way to having a model in production to reduce bias and to produce models in which you have confidence?

Randi described that she thinks of the Data Science process as being broken into 5 stages:

1) Pulling data (a degree of data engineering)

2) Curating and cleaning the data

3) Data analysis to understand what data might be useful to your model building

4) Building the model and preparing it to run in production 

5) Monitoring and maintaining the model once it’s out there

She highlighted that the first three stages offer plenty of opportunities to look for bias, noting that you can look for bias in the incoming data as well as in the features you create. She added that testing your model on highly curated “best case scenario” data won’t necessarily reflect how it will perform in the real world. Randi advised that you have to ask, “What is it that you are actually validating against?”
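Neither panelist walked through code, but a minimal sketch of the kind of check Randi describes might compare a curated training sample against a production-like sample, feature by feature. The function, column names, and significance cutoff below are illustrative assumptions, not anything shown in the webinar:

```python
# Illustrative sketch: flag features whose curated-training and incoming
# (production-like) distributions diverge. Column names and the 0.01
# cutoff are assumptions for the example, not recommendations.
import pandas as pd
from scipy.stats import ks_2samp

def compare_distributions(curated: pd.DataFrame,
                          incoming: pd.DataFrame,
                          features: list[str]) -> pd.DataFrame:
    """Run a two-sample KS test per feature and flag suspicious gaps."""
    rows = []
    for col in features:
        stat, p_value = ks_2samp(curated[col].dropna(), incoming[col].dropna())
        rows.append({"feature": col,
                     "ks_stat": stat,
                     "p_value": p_value,
                     "suspect": p_value < 0.01})  # illustrative threshold
    return pd.DataFrame(rows)

# Hypothetical usage with made-up column names:
# report = compare_distributions(train_df, prod_sample_df, ["age", "tenure", "spend"])
# print(report[report["suspect"]])
```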

Amar added that as an economist doing a lot of causal inference, her team spends a lot of time looking at the drivers of demand and at the data itself, asking questions such as, “What are we actually measuring and what are we proxying for? When I interpret this coefficient and this driver, is it really what I think it is?” She noted that because her team works with a lot of personal data that is inherently biased, this line of questioning is needed to make sure the team is not building that bias into its models.

Amar added that it’s not just through the data that you can introduce bias: it can also come from poor modeling techniques, poor validation, and overfitting.
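As a rough illustration of that validation point (not something covered in the webinar), a simple cross-validation pass can surface overfitting: a training score far above the held-out score is a warning sign. The synthetic dataset and model below are stand-ins:

```python
# Illustrative sketch: use cross-validation to compare training and
# held-out scores. The synthetic data and random forest are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0)

scores = cross_validate(model, X, y, cv=5, return_train_score=True)
print(f"train accuracy:           {np.mean(scores['train_score']):.3f}")
print(f"cross-validated accuracy: {np.mean(scores['test_score']):.3f}")
# A train score far above the cross-validated score suggests the model is
# memorizing the sample rather than learning a generalizable pattern.
```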

What are some of the implications you’ve seen from model bias in terms of the overall business impact?

Amar explained that many of her colleagues are expert witnesses in litigation around major business issues such as antitrust or class action lawsuits. She has seen court cases fall apart because of a bad model. She has also run into issues when aggregating her user-level data up and segmenting customers; segmenting customers the wrong way can badly skew the data. For this reason, she brings her models to a colleague who is not working on her project for an audit.
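As a toy illustration of how the level of aggregation can skew a customer metric (the numbers are invented, not from Amar's cases), averaging per-segment averages weights a handful of large accounts as heavily as thousands of small ones:

```python
# Toy example: "average customer spend" looks very different depending on
# whether you average at the user level or average the segment averages.
import pandas as pd

users = pd.DataFrame({
    "segment": ["enterprise"] * 3 + ["consumer"] * 997,
    "monthly_spend": [50_000, 60_000, 55_000] + [40] * 997,
})

user_level_mean = users["monthly_spend"].mean()
mean_of_segment_means = users.groupby("segment")["monthly_spend"].mean().mean()

print(f"user-level mean spend:     {user_level_mean:,.0f}")       # ~205
print(f"mean of per-segment means: {mean_of_segment_means:,.0f}")  # ~27,520
# The second figure is dominated by three enterprise accounts; reporting it
# as "average customer spend" would badly misrepresent the customer base.
```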

 

“You don’t want your model to be based on trying to find the results that your client wants.”

- Amar Natt

In pre-processing and prepping data, what have you seen that causes issues and what do you do when some of the source data is causing issues?

Amar noted that tunnel vision is a big problem, explaining that she has seen very bright data scientists working only with the data that a client has given them. In today’s world there is an abundance of external data that can be accessed and that should be used to build more accurate models. Nevertheless, she sees people get mentally locked into using only the data a client has handed them.

Randi added that in traditional companies like Dell, with legacy systems, the way to counteract the bias you already know is there is to first build something useful with the data from those legacy systems, and then use the Data Science system you put in place to start collecting new, better data going forward. In essence, you should identify the holes in your data while building a data science product, and then use that product to fill those holes going forward.

What types of processes and people do you put in place in the overall model building process to avoid some of the bias and validation challenges or to counter some of the risks?

Amar emphasized that she finds it incredibly valuable to have an internal audit process. She has a backup audit team that is not working on her project, usually consisting of another economist and an analyst, who check her thought processes and code, among other things. The reason for doing this is that someone with knowledge of the project tends to make assumptions and not question pieces of the code or the thought process that they probably should, because they’re simply following her train of thought.

Amar added that she would much prefer that a colleague pokes holes in her models than a client or opposing counsel in the courtroom.

“I think that the key skill here is being open to being wrong. I let them sort of audit my thought process.”

- Amar Natt

Randi likened the process to an academic peer review.

"You want the best ideas to come out and the best methodologies to actually come to fruition. 

And in order to do that you need iterations."

- Randi Ludwig

Do you think that a dedicated role around model validation and testing in general might evolve in your organizations (similar to the role in software development)?

Randi explained that the approach at Dell has been to have models tested by people who already have the skill set and are on the Data Science team, rather than creating dedicated testing roles. You don’t necessarily need someone testing your model every week, and it would be hard for a single person to jump between so many different projects.

What have you seen in terms of issues that arise at the point of getting it into production? Have you seen anything arise that late in the model building process?

Randi described how her team has found gaps and holes in models at the very end, particularly in cases involving low-volume data or fine-grained slices that turned out to be far more sparsely populated than they had realized. In those cases, her team gets feedback from the end users and tries to implement the necessary changes to the models.
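A minimal sketch of the kind of pre-release check that could catch such underfilled slices (assumed for illustration, not Dell's actual tooling) is simply to count rows per slice and flag anything too thin:

```python
# Illustrative sketch: flag combinations of grouping columns with too few
# rows to support reliable estimates. Column names and the threshold are
# assumptions for the example.
import pandas as pd

def find_underfilled_slices(df: pd.DataFrame,
                            group_cols: list[str],
                            min_rows: int = 30) -> pd.DataFrame:
    """Return slice combinations with fewer than `min_rows` observations."""
    counts = df.groupby(group_cols, dropna=False).size().reset_index(name="n_rows")
    return counts[counts["n_rows"] < min_rows].sort_values("n_rows")

# Hypothetical usage: flag region/product combinations too thin to trust.
# sparse = find_underfilled_slices(sales_df, ["region", "product_line"], min_rows=50)
# print(sparse)
```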

How do you account for this possibility in the process? Are you able to iterate in an agile manner with the client at the end, even when the stakes are high?

Amar said that she does a lot of check-ins with clients along the way to avoid problems at the point of delivering the model. An advantage of doing this is that you pick up important institutional knowledge from the client along the way: the client can do “gut checks” on the results the data is giving her and point out any potential problems or issues they see. If a client points out something wrong with the numbers, she’ll go back, iterate, and run checks.

“I work with a lot of CFOs and tax directors and finance directors, and their gut check is amazing. They can sniff a bad number from a thousand paces.”

- Amar Natt

When the stakes are high, however, as in litigation, Amar added that this is not possible. Instead her team has to iterate all along the way to the final product, until they get to the point where they can say to themselves, “This is the best we can possibly do with the data we have.”

Randi noted that client domain and institutional knowledge is critical to avoiding errors in the final model. Clients often have anecdotal but key information about the data and its sources that, without that level of domain knowledge, you wouldn't be able to catch.

What do you look for when you’re hiring data scientists and you’re trying to build out a team? What are you looking for skill-wise to counter mistakes in model validation and how do you build a team where people can avoid missteps?

Amar noted that she looks for three things: diversity of background, an openness to learning and the ability to accept being wrong, and the ability to quickly understand market drivers and economic structures.

Randi explained that at Dell, she focuses particularly on solid professional skills and the ability to communicate with the business side. There are a lot of people who can do the technical side of Data Science, so these professional skills are the differentiators. Randi also looks for people with a proven ability to work and think independently – people who can decipher on their own which questions are worth answering for the business and which approaches are worth trying. In addition, Randi’s team has been focusing on fostering communication among Data Scientists working within business units in order to leapfrog efforts and build a more powerful team.

What are some things you can do to create a team culture that promotes challenging each other’s assumptions and methodologies?

Amar said that it is important to create a culture where no one is thrown under the bus. She emphasized that as project leader, if something is wrong she takes the hit with the client. For her firm, the important thing is to find out what went wrong, why, and how to make sure it does not happen again, rather than blaming someone.

“I want everybody to feel OK engaging in that process without feeling like there’s some sort of

a witch hunt that’s gonna take them down.”

- Amar Natt

How do you balance speed versus caution or validation in developing a model for consumption?

Amar said that it depends on the stakes. If the stakes are high, as in litigation, then you must exercise a lot of caution. But if you just need rough answers, speed might be the priority.

Randi added that to make this call you really need input from the business people. She thinks how quickly a project moves is ultimately their call, one they should make with the help of data scientists pointing out potential risks and caveats.

“At the end of the day, you need the context from the people whose money it is,

whose end results are going to be impacted to help you balance that risk.”

- Randi Ludwig

 

Amar agreed.

“A Data Scientist’s role is to give those timelines. What are the tradeoffs between speed and accuracy, and how does that affect things, and let the stakeholders decide on what their priorities are.”

- Amar Natt

 
