written-beyond|11 days ago
I feel very moronic making a dashboard for any product now. Enterprise customers prefer you integrate into their ERPs anyway.
I think we lost the plot as an industry. I've always advocated for making a read-only database connection available to your customers so they can build their own visualisations. This should have been the standard 10 years ago, and its case is only stronger in this age of LLMs.
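To make the read-only connection idea concrete, here is a minimal sketch in Python, with SQLite standing in for the product database (the `orders` table and file path are invented for illustration). SQLite's `mode=ro` URI flag plays the role that a read-only database role or replica would play in a real deployment:

```python
import os
import sqlite3
import tempfile

# Hypothetical product database; schema is made up for illustration.
db_path = os.path.join(tempfile.mkdtemp(), "product.db")

# Vendor side: the normal read-write connection.
rw = sqlite3.connect(db_path)
rw.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
rw.execute("INSERT INTO orders (total) VALUES (19.99)")
rw.commit()

# Customer side: a read-only connection via SQLite's mode=ro URI flag.
ro = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
print(ro.execute("SELECT total FROM orders").fetchone())  # (19.99,)

# Any write attempt on the read-only handle fails.
try:
    ro.execute("INSERT INTO orders (total) VALUES (0)")
except sqlite3.OperationalError as e:
    print("rejected:", e)
```

In a real product the equivalent would be a `GRANT SELECT`-only role on a read replica, so customer queries can never touch the primary or mutate anything.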
We get so involved with our products that we forget our customers are humans too. Nobody wants another account to manage or remember. Analytics and alerts should be push based: configurable reports should be auto-generated and sent to your inbox, alerts should be pushed via notifications or emails, and customers should have the option to build their own dashboard with something like this.
Sane defaults make sense, but location matters just as much.
oogali|11 days ago
Roughly three decades ago, that *was* the norm. One of the more popular tools for achieving that was Crystal Reports[1].
In the late 90s, it was almost routine for software vendors to bundle Crystal Reports with their software (very similar to how the MSSQL installer gets invoked by products), then configure an ODBC data source which connected to the appropriate database.
In my opinion, the primary stumbling block of this approach was the lack of a shared SQL query repository. So if you weren’t intimately familiar with the data model you wanted to work with, you’d lose hours trying to figure it out on your own, or rely on your colleagues sharing it via sneakernet or email.
Crystal Reports has since been acquired by SAP, and I haven’t touched it since the early ‘00s so I don’t know what it looks or functions like today.
1: https://en.wikipedia.org/wiki/Crystal_Reports
yesbabyyes|11 days ago
I was a developer, albeit not professionally, and my boss gave me the opportunity to develop the integration between Agresso and Crystal Reports, my first professional development project, for which I am still grateful. It was a DLL written in C++, and I imagine they shipped it for quite a while after I left for greener pastures.
I was already a free software and Linux enthusiast, so I did a vain skunkworks attempt at getting Agresso to run with MySQL, which failed, but my Linux server in the office came in handy when I needed some extra software in the field--I asked a colleague to put a CD in the server so I could download it to the client site some 500 km away, and deliver on the migration.
AgharaShyam|11 days ago
Customers need it to build custom reports, archive data into a warehouse, drive downstream systems (notifications, audits, compliance), and answer edge-case questions you didn’t anticipate.
Because of that, I generally prefer these patterns over a half-baked built-in analytics UI or an opinionated REST API:
Provide a read replica or CDC stream. Let sophisticated customers handle authz, modelling, and queries themselves. This gets harder with multi-tenant DBs.
Optionally offer a hosted Data API, using something like PostgREST, Hasura, or Microsoft DAB. You handle permissions and safety, but stay largely un-opinionated about access patterns.
Any built-in metrics or analytics layer will always miss edge cases.
With AI agents becoming first-class consumers of enterprise data, direct read access is going to be non-negotiable.
Also, I predict the days of charging customers to access their own goddamn data behind rate-limited, metered REST APIs are behind us.
conormccarter|11 days ago
The CDC stream option you flagged is more viable in my (admittedly biased) opinion. At my company (Prequel), our entire pitch is basically "you should give your customers a live replica of their data in whatever data platform they want it in" (and let us handle the cross-platform compatibility and multi-tenant DB challenges).
I think this problem could also be a killer use case for Open Table Formats, where the read-replica architecture can be mirrored but the cost of reader compute can be assumed by the data consumer.
To your point, this is only going to be more important with what will likely be a dramatic increase in AI agent data consumption.
mitjam|11 days ago
I was part of this and "saw the light". We had such great visibility into all the processes, it was unreal. It tremendously sped up cross-org initiatives.
Today, I guess, only agents get that privilege.
owlstuffing|11 days ago
I get your point, but generally with most enterprise-scale apps you really don’t want your transactional DB doubling as your data warehouse. The “push-based” operation should be limited to moving data from your tx environment to your analytical one.
Of course, if the “analytics” are limited to simple static reports, then a data warehouse is overkill.
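That tx-to-analytics push can be sketched with two SQLite databases standing in for the transactional and analytical environments. The high-watermark incremental copy below is just one assumed approach (CDC or log shipping would be the heavier-duty alternatives), and the `events` schema is invented:

```python
import sqlite3

# Transactional database: the live system of record (schema is illustrative).
tx = sqlite3.connect(":memory:")
tx.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, amount REAL)")
tx.executemany("INSERT INTO events (kind, amount) VALUES (?, ?)",
               [("sale", 10.0), ("refund", -2.5)])

# Analytical database: where customer-facing reporting actually runs.
dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, kind TEXT, amount REAL)")

def push_increment(tx, dw):
    """Copy only rows the warehouse hasn't seen yet (high-watermark on id)."""
    (watermark,) = dw.execute("SELECT COALESCE(MAX(id), 0) FROM events").fetchone()
    rows = tx.execute(
        "SELECT id, kind, amount FROM events WHERE id > ?", (watermark,)
    ).fetchall()
    dw.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    dw.commit()
    return len(rows)

print(push_increment(tx, dw))  # 2 rows on the first push
print(push_increment(tx, dw))  # 0 -- nothing new to move
```

Reporting queries then hit `dw`, and the transactional database only ever pays the cost of the periodic push.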
matsz|11 days ago
A layer on top of the database to account for auth/etc. would be necessary anyway. It could be achieved to some degree with views, but I'd prefer an approach where you choose the publicly available data explicitly.
GraphQL almost delivered on that dream. Something more opinionated would've been much better, though.