Choosing loss functions

Choosing a loss function is an important step in setting up a well-designed machine learning task. It’s a choice that requires domain and business context, and it often requires some amount of technical experience. Finally, it’s something you probably don’t want to change often, if at all.

So: an up-front, somewhat irreversible decision that requires expertise and input from multiple disciplines. Super fun, right? Let’s talk about what goes into this kind of decision, what loss functions entail, and then how you can pick the best one. I’ll also include a list of common loss functions toward the bottom.
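
To make the stakes concrete, here’s a minimal sketch (using NumPy; the residual values are made up for illustration) of how two common losses score the same set of prediction errors very differently:

```python
import numpy as np

# The same prediction errors, scored under two common losses.
# Squared error is dominated by the single large miss, while absolute
# error treats every unit of error equally -- which one is "right"
# depends on how costly outliers are in your domain.
residuals = np.array([0.5, -0.3, 0.2, 8.0])  # one large miss

mse = np.mean(residuals ** 2)     # mean squared error ~= 16.10
mae = np.mean(np.abs(residuals))  # mean absolute error = 2.25

print(f"MSE: {mse:.2f}")
print(f"MAE: {mae:.2f}")
```

If large misses are catastrophic for the business, the outlier sensitivity of squared error may be exactly what you want; if they’re mostly noise, absolute error is often the safer default.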

Continue reading “Choosing loss functions”

Model performance often degrades over time

An extremely painful, easily missed issue with machine learning products is that their performance tends to degrade over time. Generally speaking, the best day of a new model’s life is its last day in development. Performance will likely take a hit the moment it hits production and slowly degrade from there. This is totally normal and simply something to prepare for as your data products mature.

The pain of lost opportunity can be subtle or dramatic. We often spend a lot of time developing data sources and inferential products, and we struggle to get them to achieve strong performance in our lab tests. After spending all that time, it can be easy to hold high expectations for the model’s performance. Really, though, lab performance should be thought of as something closer to a soft upper bound on live model performance.

In practice, model performance can be severely impacted almost immediately, or it can degrade slowly in ways that are more subtle but leave just as big a gap. Even a very advanced model can perform no better than chance if the context it was deployed in changes significantly. Finally, model degradation is difficult and expensive to measure in the lab. It’s possible you won’t even know how bad the degradation problem is until the model is live. It’ll just show up later in the bottom line.
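
One way to catch this is to treat your lab numbers as a baseline and watch for live metrics slipping below them. Here’s a hypothetical sketch; the baseline, window size, and alert threshold are illustrative assumptions, not recommendations:

```python
import numpy as np

# Hypothetical degradation check: compare a rolling window of live
# accuracy against the accuracy measured during development.
LAB_ACCURACY = 0.91  # measured in the lab; treat as a soft upper bound
ALERT_DROP = 0.05    # alert if live accuracy falls this far below lab
WINDOW = 500         # number of recent predictions to evaluate

def has_degraded(y_true: np.ndarray, y_pred: np.ndarray) -> bool:
    """Return True if recent live accuracy has slipped past the threshold."""
    live_accuracy = np.mean(y_true[-WINDOW:] == y_pred[-WINDOW:])
    return live_accuracy < LAB_ACCURACY - ALERT_DROP
```

Even a crude check like this makes degradation visible long before it quietly shows up in the bottom line.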

Continue reading “Model performance often degrades over time”

Checklists for data product staging

Naming the level of development of a data product is a matter of judgement and experience, but the following list of questions can help you develop that judgement and be consistent in its application. Feel free to use it as a starting point for your own checklist.

Data projects come in many shapes and sizes. If there are questions you use to judge a project’s maturity, or if a major aspect of data projects is missing below, please email me at joseph@simplicial.io and I’ll add it.

Continue reading “Checklists for data product staging”

Avoiding murky data science projects

Paying for data science projects can be stressful.

The dream is a straightforward application of existing, high-powered statistical and machine learning technologies to relevant data you already have. Flip a switch and out pop tools for better decision making or new products for your customers. The reality is that none of it is straightforward and, in the worst cases, you end up in research hell.

A big part of data science is learning and exploration. You may not know what you don’t know; you may not know what opportunities exist in the data you have access to (or could easily get access to). So, when you charter a data science team to solve a business problem, you may be setting off on a long, murky journey.

Research hell is when these projects struggle to deliver, but stay tantalizing. You invest and invest and wait and wait and the project trudges on.

And on. And on. And on… Misery.

Now, instead of jumping on whole new opportunities born of your investment in data, you’re nursing a murky plan and managing a distressed, disconnected long-term research team. Or, worse, left judging a science fair.

Continue reading “Avoiding murky data science projects”