Hacker News | vladpowerman's comments

Hunterizer is building what we call “hunting layer intelligence”: a rules engine that combines hunting seasons, zone boundaries, weapon restrictions, and land ownership into a single location-based view.

Given a specific coordinate and date, the system determines:
• what species are in season
• which zone applies
• whether the parcel is huntable or non-huntable
• weapon constraints
• boundary proximity warnings

We’re also adding ecological layers such as big game food sources (hard mast, soft mast, browse) to help hunters understand habitat context, not just legal status.

The goal is to reduce regulatory ambiguity in the field by translating fragmented state PDFs and shapefiles into structured geospatial logic.
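A minimal sketch of that kind of coordinate-and-date lookup, with entirely hypothetical rule data (zones simplified to bounding boxes; real zones would be shapefile polygons with proper point-in-polygon tests):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Zone:
    name: str
    bbox: tuple          # (min_lat, min_lon, max_lat, max_lon) -- simplified
    seasons: dict        # species -> (open_date, close_date)
    weapons: set         # allowed weapon types

# Hypothetical example data, not real regulations.
ZONES = [
    Zone("Zone 4B", (44.0, -73.0, 45.0, -72.0),
         {"whitetail deer": (date(2024, 10, 1), date(2024, 12, 15)),
          "turkey": (date(2024, 5, 1), date(2024, 5, 31))},
         {"bow", "shotgun"}),
]

def lookup(lat, lon, on):
    """Return (zone name, species in season, allowed weapons) for a point/date."""
    for z in ZONES:
        min_lat, min_lon, max_lat, max_lon = z.bbox
        if min_lat <= lat <= max_lat and min_lon <= lon <= max_lon:
            open_species = [s for s, (o, c) in z.seasons.items() if o <= on <= c]
            return z.name, open_species, sorted(z.weapons)
    return None, [], []
```

For example, `lookup(44.5, -72.5, date(2024, 11, 1))` falls inside the sample zone during the deer season and outside the turkey season.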

Feedback from engineers and GIS folks especially welcome.


Great read. I’ve been modeling developer activity as a time-series key-value system where each developer is a key and commits are values. Faced the same issues: logs grow fast, indexes get heavy, range queries slow down. How do you decide what to drop when compacting segments? Balancing freshness and retention is tricky.
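One simple compaction policy for this setup is "keep the newest N values per key." A rough sketch (hypothetical segment layout of `(key, timestamp, value)` tuples, merged in memory for illustration):

```python
import heapq
from collections import defaultdict

def compact(segments, retain=3):
    """Merge segments of (key, timestamp, value) tuples, keeping only the
    `retain` newest entries per key and dropping everything older."""
    merged = defaultdict(list)
    for seg in segments:
        for key, ts, value in seg:
            merged[key].append((ts, value))
    out = []
    for key, entries in merged.items():
        # nlargest sorts by timestamp (first tuple element), newest first
        for ts, value in heapq.nlargest(retain, entries):
            out.append((key, ts, value))
    return sorted(out)
```

The trade-off in the question shows up directly in `retain`: a larger value preserves history for range queries, a smaller one keeps segments light.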


I'm curious: how much data do you have? I have 12 years of dev data, and reports generate in seconds, if not milliseconds. What are your key patterns? It sounds like a key-design problem.
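To make the key-design point concrete: a composite key like `<developer>#<ISO date>` (a hypothetical scheme, not necessarily the parent's) keeps one developer's commits contiguous in a sorted key space, so a per-developer date range becomes a cheap range lookup instead of a full scan:

```python
import bisect

# Sorted key space; in a real store this would be the storage engine's
# sorted index (LSM tree, B-tree, etc.).
keys = sorted([
    "alice#2024-01-03",
    "alice#2024-02-10",
    "bob#2024-01-05",
])

def range_scan(dev, start, end):
    """Return keys for `dev` between two ISO dates via binary search."""
    lo = bisect.bisect_left(keys, f"{dev}#{start}")
    hi = bisect.bisect_right(keys, f"{dev}#{end}")
    return keys[lo:hi]
```

ISO dates sort lexicographically, which is what makes the string comparison line up with the date range.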


Great point, that’s definitely one of the biggest limitations right now. GitCruiter only sees public repos, so it naturally misses the context of people’s real work.

I’m exploring ways to let developers optionally connect private repos or upload anonymized activity snapshots, provided there’s enough community interest to get privacy and consent right.

Totally agree this would make the results far more representative. Thanks for raising it.


The compression framing is super interesting. It makes me wonder if there’s an equivalent notion for source code - like how much “information” or entropy a commit contains vs. boilerplate churn.

I’ve been exploring Git activity analysis recently and ran into similar trade-offs: how do you tokenize real-world code and avoid counting noise?
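One cheap proxy for the "information in a commit" idea is compressibility: repetitive boilerplate compresses far better than genuinely novel code. A minimal sketch using stdlib zlib (the sample strings are invented for illustration):

```python
import zlib

def compression_ratio(text):
    """Compressed size / raw size: lower means more redundant text."""
    raw = text.encode()
    return len(zlib.compress(raw)) / len(raw)

# Highly repetitive "boilerplate churn" vs. a short, non-repeating snippet.
boilerplate = "self.x = x\n" * 50
novel = "def parse(t): return [w[::-1] for w in t.split() if len(w) > 3]\n"
```

Here `compression_ratio(boilerplate)` comes out well below `compression_ratio(novel)`, which matches the intuition that churned boilerplate carries little entropy per line.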

