n-e-w's comments
n-e-w | 2 years ago | on: Harvard CS50: How to build apps with GPT4
n-e-w | 3 years ago | on: How Google's Chief Decision Scientist Makes Resolutions
n-e-w | 3 years ago
It's not 1920. The modern middle manager has more invested in his fantasy football league than his direct reports -- and by some magnitude. Grow up.
n-e-w | 3 years ago | on: AUS Attorney-General Claims Ability to Intercept Signal / WhatsApp Messages
n-e-w | 3 years ago
[1] https://www.cell.com/iscience/fulltext/S2589-0042(22)01141-5...
n-e-w | 3 years ago
I would say in 2018-2019 the ML / Deep learning stack was actually very impressive compared with what was available in FOSS. Now -- it's languishing. Badly.
n-e-w | 3 years ago
It really is an astounding system. If you enjoy functional programming, the language is super expressive and productive. The integrated libraries are fantastic -- especially if you want to chain together different knowledge domains through a consistent interface and syntax. You can get the entire Wolfram Alpha knowledgebase and curated datasets to use in your programs immediately.
The front-end is light-years ahead of Jupyter notebooks in just about every conceivable way.
That's not to deny there are frustrating things about the language (machine learning development, for instance, has stopped dead after a couple of really great years of feature work...why?) and, especially, about the organisation and some of its personalities. But the product really deserves more kudos and wow factor for its real accomplishments and the features that are live, in the real world.
I love Wolfram stuff.
n-e-w | 3 years ago | on: PyTorch Beat TensorFlow, Forcing Google to Bet on Jax
For non-subscribers
n-e-w | 3 years ago
Hospitals are glorified hotels with super over-priced room service and activities. But the business model is very similar.
n-e-w | 3 years ago | on: The WOPR in WarGames Was an Apple II
n-e-w | 3 years ago
Less well known is uric acid's effect on hypertension -- which would also perhaps (anecdotally) explain likely elevated levels of heart attack and stroke in that same demographic.
All terribly confounded, though. Hard to unravel chickens from eggs, lifestyle factors etc.
n-e-w | 3 years ago | on: Why do top achievers drink Diet Coke?
I’ve had a non-trivial “addiction” to Diet Coke at various times in my life. MDs and neuroscientists, please weigh in! When I put my Bayesian hat on, I think this claim falls over: lots of people in general drink Diet Coke, so the background rate is high, and respondents are likely to self-select as being “more intelligent”, etc.
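To make the Bayesian point concrete, here is a toy calculation with entirely made-up numbers (the prior, and both conditional rates, are assumptions for illustration only): even if "top achievers" drink Diet Coke at a much higher rate than everyone else, the high background rate of drinkers keeps the posterior close to the prior.

```python
# Illustrative Bayes' rule calculation; all three numbers below are invented.
p_top = 0.01               # assumed prior: fraction of people who are "top achievers"
p_dc_given_top = 0.50      # assumed: half of top achievers drink Diet Coke
p_dc_given_not = 0.30      # assumed background rate among everyone else

# Total probability of drinking Diet Coke (law of total probability)
p_dc = p_dc_given_top * p_top + p_dc_given_not * (1 - p_top)

# Posterior via Bayes' rule
p_top_given_dc = p_dc_given_top * p_top / p_dc

print(f"P(top achiever | Diet Coke) = {p_top_given_dc:.3f}")  # roughly 0.017
```

Under these assumed numbers, seeing someone drink Diet Coke moves the probability they are a "top achiever" from 1% to only about 1.7% — the evidence is nearly worthless because the base rate of drinkers swamps it.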
n-e-w | 3 years ago
This observation is absolutely spot on. An ignored element of meritocracy is visibility -- and, perhaps, as you suggest, inertia. And then we end up back at the black hole of designing the 'right' incentive schemes...
sigh
n-e-w | 3 years ago
n-e-w | 4 years ago
Compassionate capitalism is more needed today than ever. Sure, this is likely just some low-level buffoon at a chain mouthing off in ignorance. But that's why it's dangerous; toxic capitalism has thoughtlessly metastasized to the middle.
Given the severe and extreme states most commodities markets have found themselves in of late, I expect an exacerbating withdrawal of liquidity by most market participants -- even posting margin for normal hedging requirements is beginning to require completely untenable amounts of free cash flow. I fear that severe inflation is very much looming at the door.
n-e-w | 4 years ago
And that kind of hustle is a prerequisite to solid business success anyway. So, get after it!
“The data sets were randomly divided into training (85%) and test (15%) sets. We used 10-fold cross-validation to obtain generalized results of model performance. Data splitting was performed at the participant level and stratified based on the outcome variables. Because the data classes were imbalanced for symptom severity (ADOS-2 and SRS-2), we performed a random undersampling of the data at the participant level before conducting data splitting. Moreover, we examined different split ratios (80:20 and 90:10) to assess the robustness and consistency of the predictive performances across diverse splitting proportions.”
* Undersampling is problematic here and probably introduced some bias. These imbalanced-class problems are just plain hard; claiming one hundred percent accuracy on an imbalanced-class problem should probably cause some concern.
* The data split at the participant level has to be done really carefully or you'll overfit.
* Multiple-comparisons bias from testing multiple split ratios on the same test data; same with the 10-fold cross-validation.
* Not sure if they validated results on any external test data.
* Outcome-variable stratification also has to be done really carefully or it will introduce bias; it seems particularly sensitive in this case.
* Using severity of symptoms as class labels is problematic. These have to have been diagnosed the same way / consistently to be meaningful.
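To illustrate the participant-level point above, here is a minimal sketch in pure Python (the data layout, participant IDs, and image names are all invented for the example). Splitting at the image level instead would let a model memorise participant-specific retinal features and leak them into the test set.

```python
import random

def participant_level_split(samples, test_frac=0.15, seed=0):
    """Split (participant_id, image_id) pairs so no participant
    appears in both train and test."""
    participants = sorted({pid for pid, _ in samples})
    rng = random.Random(seed)
    rng.shuffle(participants)
    n_test = max(1, int(len(participants) * test_frac))
    test_ids = set(participants[:n_test])
    train = [s for s in samples if s[0] not in test_ids]
    test = [s for s in samples if s[0] in test_ids]
    return train, test

# Toy data: 20 hypothetical participants with 3 retinal images each
data = [(pid, f"img_{pid}_{k}") for pid in range(20) for k in range(3)]
train, test = participant_level_split(data)

# Sanity check: no participant crosses the split
assert {p for p, _ in train}.isdisjoint({p for p, _ in test})
```

Note this only addresses the leakage concern; the undersampling and stratification steps the paper describes would still need equally careful handling.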
I also note these images were collected over a long period (15 years, iirc). Hard to believe such a diverse set of images (collection sites, equipment, etc.) led to perfect results.
ML issues aside, super interested in the basic medical concept. I wasn’t aware retinal abnormalities could be indicative of issues like ASD.