I started using Freeboard (http://freeboard.github.io/freeboard/) on my latest project. It's really slick and requires no front-end coding. Worth checking out!
Yes, I used Freeboard (https://github.com/Freeboard/freeboard) a couple of years ago for an IoT store. It's highly customizable, has a fantastic UI, and can be used with minimal effort. Highly recommended.
Really cool, thanks for sharing. Any idea how much data this can ingest before the rendering gets slow? I know horsepower on the client will vary, and the definition of slow is subjective, but just ballpark. Or is that an exercise for the reader?
All great questions. I haven't done any genuine analysis outside of "this seems fine on my MacBook" which I fully admit is not worth much.
Check out the example app and endpoints app, then load the poorly.json and kitchen sink.json files into a new dashboard. I wouldn't expect anyone to make dashboards that extreme, and they all render fine (we're talking upwards of 30 charts, many using WebGL for 3D).
All that being said, I will continue to optimize this where I can.
Does anyone have experience running Flask at scale? I used it once, for a small internal reporting service, and was shocked to find that it was designed to process only a single request at a time per process. Is this a problem in the real world, or do decent caching rules and query design render it a non-issue?
What do Flask apps do when they want a live connection to the client, or need to serve a heavy (slow) request? Communicate with a Node websocket server over a queue, and share a database?
I don't mean to disparage Flask. Its goal is to make it simple to stand up a site with minimal boilerplate or bloat, and it succeeds at that.
You would typically use something like nginx as a proxy, in conjunction with uWSGI for managing a number of workers, and then offload slow operations to a task queue via Redis or something similar. Caching obviously helps where it's applicable, and it's also easy to expand to multiple servers with a load balancer. Websockets are a bit more complicated but definitely possible.
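To make the proxy-plus-workers part concrete, a minimal sketch might look like the config below. The hostname, port, paths, and app module name are all placeholders, not from any particular project:

```
# /etc/nginx/conf.d/myapp.conf -- illustrative proxy config
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward to a gunicorn worker pool, e.g. started with:
        #   gunicorn --workers 4 --bind 127.0.0.1:8000 myapp:app
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

nginx handles slow clients and static files, while gunicorn keeps several Python worker processes busy with application requests.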
As a side note, the synchronous request processing is more a consequence of Python than of Flask itself. I've personally found that I build more scalable things in Python than in something like Node, because it lends itself to scalable architecture decisions. You can do a lot of things in Node that are super convenient when you have a single server but that require major changes when you expand.
Flask's built-in web server is for debug/development use. Run your Flask apps under Gunicorn, Twisted Web, or any of the other supported servers in production.
I have used Flask at significant scale for REST API requests.
I haven't done web sockets with it - what work I've done with sockets has been in Node.
For building REST APIs, it doesn't get easier, IMO. It's very straightforward, it scales well, and its simplicity makes troubleshooting a reasonable task.
Its appropriateness for slow requests may be questionable, but before spending too much time on a more robust solution, it's worth looking into why requests are slow in the first place. Caching, message queues, etc. are easy solutions to implement. Data store optimization is generally a quick and easy win that should be done regardless. When it gets to the point where Python is the limiting factor, it's easy to replace because the client-facing front end is generally a proxy.
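On the caching point: for repeated expensive reads, Python's stdlib memoization is often enough before reaching for anything heavier. The function name and workload below are made up for the example; a real version would hit the data store:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def report_totals(month):
    # Stand-in for an expensive data-store aggregation.
    return sum(range(100_000))

report_totals("2016-01")   # computed on the first call
report_totals("2016-01")   # served from the in-process cache
hits = report_totals.cache_info().hits  # one cache hit so far
```

The obvious caveats apply: the cache is per-process (so per-gunicorn-worker), and you need an invalidation story, which is where Redis or memcached come in.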
I run some Flask microservices, and the key has been to use Gunicorn and nginx. It scales up very well. It's not as quick as some other Go microservices I have, but Python has an advantage in terms of libraries for my use case. Flask is simple, and that helps to keep things under control.
IIRC, Flask deploys straight to AWS Elastic Beanstalk (Gunicorn) with minimal configuration; I think it's even given as an example in EB's docs. I've deployed an EB-hosted Flask instance (two EC2 instances behind a load balancer, zero issues) in production.
Nobody ever uses the built-in Flask server for production. The common deployment pattern is to load the Flask app into a WSGI or asyncio server, which will then handle requests with scalable threading/process models.
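For reference, the WSGI interface those servers speak is tiny, and a Flask app object is itself such a callable. A minimal sketch in pure stdlib Python, with all names illustrative:

```python
def app(environ, start_response):
    # A WSGI application: takes the request environ dict and a
    # start_response callable, returns an iterable of bytes.
    body = b"hello from WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Drive it in-process the same way a server like gunicorn would.
captured = {}
def fake_start_response(status, headers):
    captured["status"] = status

chunks = app({"REQUEST_METHOD": "GET", "PATH_INFO": "/"}, fake_start_response)
```

Because the contract is this small, swapping Gunicorn for uWSGI (or adding more workers) doesn't touch application code at all.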
Look into uwsgi or gunicorn, and you'll never look back :)
D3.js is one of many libraries used to generate these kinds of graphics. This project wraps up a bunch of different charting components into an easy-to-use combination.
The OP's project is also pretty cool.
https://www.youtube.com/watch?v=tdIIJuPh3SI
https://github.com/miguelgrinberg/flack
https://speakerdeck.com/miguelgrinberg/flask-at-scale
I used nginx as an HTTP proxy and Gunicorn for WSGI. It worked decently well, although it was pretty resource-intensive (CPU, RAM).
Today you can use something like Caddy[0] and Gunicorn[1]
[0] https://caddyserver.com
[1] http://gunicorn.org
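For anyone trying this today, the Caddy side is only a few lines. This is current (v2) Caddyfile syntax, and the hostname, port, and app module name are placeholders:

```
example.com {
    reverse_proxy 127.0.0.1:8000
}
```

with Gunicorn bound to the same port, e.g. `gunicorn --workers 4 --bind 127.0.0.1:8000 myapp:app`. Caddy also provisions TLS certificates automatically, which is a big part of its appeal over hand-configured nginx.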
WSGI is partly to blame.
A web technology derived from a 1993 design has no place in 2016.