Show HN: Zingg – open-source entity resolution for single source of truth
18 points | sonalgoyal | 4 years ago
I am Sonal, a data consultant from India. For the past few months (and years!), I have been working on an entity resolution tool to build a single source of truth for customers, suppliers, products and parts. Here is a short demo of Zingg in action https://www.youtube.com/watch?v=zOabyZxN9b0
As a data consultant, I often struggled to build unified views of core entities on the datalake and the warehouse. Data spread across different systems has variations and inconsistencies, making Customer 360, KYC, AML, segmentation, personalization and other analytics difficult.
As I talked with different clients facing this issue, I searched for existing solutions which I could use or recommend. Unfortunately, most of them were very expensive MDM solutions like Tamr, or CDP solutions like Amperity. There were many open source libraries, but they did not tie well into the datalake/warehouse scenarios we were working with, did not scale, needed a decent bit of programming, or did not generalize. I even tried to build something internally and failed miserably, and that got me hooked :-)
As I dug deeper into the problem, I realized that there were multiple challenges. Data matching, at its very core, becomes a Cartesian join, as you need to compare every pair of records to figure out the matches. With millions of records, this becomes extremely tough to scale. I referred to various research papers and then implemented a blocking algorithm to overcome this. More details at https://docs.zingg.ai/docs/zModels.html
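To make the blocking idea concrete, here is a toy sketch (this is not Zingg's actual algorithm, and the key function is entirely made up for illustration): records that share a blocking key are grouped, and candidate pairs are generated only within each block, so most of the Cartesian join is never materialized.

```python
# Toy blocking sketch: group records by a cheap key, then compare only
# within groups instead of comparing every record with every other record.
from collections import defaultdict
from itertools import combinations

records = [
    {"id": 1, "name": "Jonathan Smith"},
    {"id": 2, "name": "Jon Smith"},
    {"id": 3, "name": "Maria Garcia"},
    {"id": 4, "name": "Mariah Garcia"},
]

def blocking_key(record):
    # Hypothetical key: first 3 characters of the lowercased first token.
    return record["name"].lower().split()[0][:3]

blocks = defaultdict(list)
for r in records:
    blocks[blocking_key(r)].append(r)

# Pairs are generated per block: 2 pairs here instead of all 6.
candidate_pairs = [
    (a["id"], b["id"])
    for block in blocks.values()
    for a, b in combinations(block, 2)
]
print(candidate_pairs)  # [(1, 2), (3, 4)]
```

A real system learns or tunes the blocking function so that true matches rarely land in different blocks, which is the hard part this sketch glosses over.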
The second challenge was to say which pairs are a match. I wanted to have a machine learning-based approach to handle the different types of entities and the variety of differences in real world data. But I also felt that non-ML experts should be able to use Zingg easily, so I took the approach of abstracting away the feature generation and hyper-parameter tuning for the classifier.
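The feature-abstraction idea can be sketched roughly like this (a hypothetical illustration, not Zingg's internals): for each candidate pair, numeric similarity features are computed per attribute automatically, and those vectors feed a binary match classifier, so users never write feature code themselves.

```python
# Hypothetical per-attribute similarity features for a candidate pair.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # A single string-similarity ratio in [0, 1]; real systems typically
    # mix several measures (edit distance, Jaro-Winkler, phonetic codes).
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def pair_features(rec_a: dict, rec_b: dict, fields: list) -> list:
    # One similarity score per attribute; this vector goes to a classifier.
    return [similarity(rec_a[f], rec_b[f]) for f in fields]

a = {"name": "Jon Smith", "city": "Pune"}
b = {"name": "Jonathan Smith", "city": "Pune"}
features = pair_features(a, b, ["name", "city"])
print(features)  # name similarity < 1.0, city similarity == 1.0
```

The appeal of hiding this step is that adding a new entity type only requires declaring which fields to match, not engineering new features.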
Once I settled on the ML approach, the problem of training data quickly arose. That led me to pick up active learning and build an interactive labeler through which sample records can be marked as matches and non-matches to build training sets quickly. I still feel that we should have an unsupervised approach as well, but I have not yet figured out the right way to do so.
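The core of the active-learning loop can be sketched as uncertainty sampling (a common technique; I am assuming it here for illustration, and the scores below are made up): the pairs whose current match score sits closest to 0.5 are the most informative ones to show the user next.

```python
# Minimal uncertainty-sampling sketch: pick the pairs the current model
# is least sure about, ask the user to label them, retrain, repeat.
def most_uncertain(scored_pairs, k=2):
    # scored_pairs: list of (pair_id, match score in [0, 1]).
    # Distance from 0.5 measures the model's confidence.
    return sorted(scored_pairs, key=lambda p: abs(p[1] - 0.5))[:k]

scored = [("p1", 0.97), ("p2", 0.52), ("p3", 0.05), ("p4", 0.44)]
to_label = most_uncertain(scored)
print([pid for pid, _ in to_label])  # ['p2', 'p4']
```

Each round of labels sharpens the classifier, which in turn makes the next batch of questions more useful, so a few dozen labels can go a long way.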
The Zingg repository is hosted at https://github.com/zinggAI/zingg and we have close to 60 members on our Slack (https://join.slack.com/t/zinggai/shared_invite/zt-w7zlcnol-vEuqU9m~Q56kLLUVxRgpOA). We are now two developers working full time on Zingg!!! I am super happy that early users have been able to use Zingg and push us to build more stuff - model documentation, using pre-existing training data, native Snowflake integration etc.
I have been an open source consumer all my dev life, and this is the first time I have made a decent contribution. It is my first time trying to build a community as well. Not sure how the future will unfold, but wanted to reach out to the community here and hear what you think about the problem, the approach, any ideas or suggestions.
Thanks for reading along, and please do post your thoughts in the comments below.
rishsriv | 4 years ago
IMO, it would be super useful to have some performance benchmarks – how fast is this for 1k/100k objects? How does that compare to other approaches, etc.?
Not sure how feasible these are, but features I would find super useful:
- string matching across languages in different scripts (with something like unidecode maybe? [1])
- fuzzy matching that includes continuous variables like lat/long, age etc
Excited about using this – will be following the repo very closely!
[1] https://github.com/avian2/unidecode
sonalgoyal | 4 years ago
Thanks for liking Zingg, super excited to hear this :-) Here are some performance numbers: https://docs.zingg.ai/docs/setup/hardwareSizing.html
We see that performance varies by a) the number of attributes to match, b) the size of the data, c) the type of matching and the features we compute for each, and d) hardware and cluster size.
Although we do not do matching across languages, like English with Chinese, we have tested Zingg quite rigorously with Chinese, Japanese, Hindi, German and other languages, and it seems to work out of the box, likely due to Java's built-in Unicode support and the ML-based learning.
You make a great point about continuous variables like lat/long, age etc. Age seems to work, again due to integer differences and the learning. Have not tried lat/long yet. Would you have any dataset you could recommend for testing?
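One plausible way to fold lat/long into the feature set (a hypothetical sketch, not something Zingg does today) is to compute the haversine distance between two coordinate pairs as a numeric feature, which the classifier could then learn a threshold over, just as it does for string similarities.

```python
# Hypothetical lat/long feature: great-circle (haversine) distance in km
# between two records' coordinates, usable as one input to a match classifier.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Two nearby points (both in Delhi) should yield a small distance feature.
d = haversine_km(28.6139, 77.2090, 28.7041, 77.1025)
print(d)
```

Age could be handled the same way with a plain absolute difference; the classifier decides how much weight each continuous feature deserves.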