We "do ML" for large organizations as a tiny consultancy. The way we've been able to improve working conditions for ourselves (developers and data scientists) has been by focusing on two things:
- Process: we analyzed what worked and what didn't in past projects, continuously auditing and trying to extract learnings. We made sure the people we were building for at the client organization were involved. We scoped more thoroughly. We involved the parts of the client organization that could torpedo the project downstream (legal, security, etc.) upfront. Made fewer assumptions. Listened more.
- Tooling: we built a machine learning platform[0] so that a data scientist doesn't have to tap anyone on the shoulder to troubleshoot their system, set up their computing environment, or deploy their model. They can do it themselves. It also meant we didn't need to hire people who could move across the entire stack.
Changing our processes and the way we do consulting had a huge impact. A badly scoped project will, in one way or another, create toil downstream and put you in a situation where you need full-stack people and constant "all-hands-on-deck" firefighting. That's just bad. After we ruthlessly reworked the process, we had better results, better relations with clients, better cadence, etc. I emphasize this because at one point we were a larger team running around on so many simultaneous projects that everyone was practically burned out.
Thanks. It fell through the cracks on HN, and I didn't want to re-submit it so as not to be spammy.
Although we did technically add multi-cluster Kubernetes support: it was GKE-only, and now it runs notebooks and workloads on AWS EKS, Azure AKS, and DigitalOcean as well. I'm not sure that's enough of an improvement under the Show HN rules to re-submit. Plus, I'm reworking the landing page and docs to clarify what this thing does, with GIFs showing real-time collaboration and all.
Your headline "Get Data Products Right" is much more vague than the first sentence of your Show HN: "iko.ai offers real-time collaborative notebooks to train, track, deploy, and monitor models"
I would update both the title tag and that headline to be a condensed version of that sentence. I'd also suggest considering the buzzword "lifecycle" to merge write/deploy/track/monitor (test?): "Collaborative notebooks for your ML-model lifecycle".
Thanks, boulos. (I considered sending you a weird incident on GCP, by the way.)
>Your headline "Get Data Products Right" is much more vague than the first sentence of your Show HN: "iko.ai offers real-time collaborative notebooks to train, track, deploy, and monitor models"
In the current draft, the headline stays because it states the goal, but the sentence "The machine learning platform for real world projects" is replaced by "Real-time collaborative notebooks to train, track, deploy, and monitor your machine learning models".
>I'd also suggest considering the buzzword "lifecycle" to merge write/deploy/track/monitor (test?): "Collaborative notebooks for your ML-model lifecycle".
I considered it, and even considered using "MLOps", but I'll postpone that for now. Every "validate-the-market" landing page claims "end-to-end lifecycle management no-code MLOps AI", so I wanted to be humble, and thus specific, about what this does for now.
The docs will also be improved, and the "UX flow" as well, to get users unstuck from sign-in to job done smoothly. We won't focus on making it pretty for now, though.
[0]: https://news.ycombinator.com/item?id=28373127