I checked out the Wired article. The "not recommended" verdict seems to mostly be about the fact that it's a tiny display, which some people (like the reporter) will have trouble reading. That the screen degraded didn't help, of course. The reporter doesn't want to use a web dashboard to check the readings on his indoor air monitor. I think that's a fair comment, maybe just a bit harsh to put it in the not-recommended bucket. I understand that this can affect sales quite a bit for a small supplier like AirGradient.
Looks like the AirGradient went up against a similar device with a huge colour screen that's easy to read. No wonder the reviewer preferred that one, although I'm not sure the $100 price difference is worth it for other people.
The AirGradient screen isn't even that small, but the UI could be much more user-friendly IMO. There's a reason all the other meters with screens go with HUGE NUMBER + tiny label.
I'm sure many people will prefer the AirGradient, but I don't think the reviewer is wrong for having different preferences.
I don't think any of this review is unfair. Shipping a broken product and then covering the fix with your warranty is better than shipping a broken product and telling the consumer to get bent, but worse than shipping a working product. A product that advertises fewer features and delivers them is better than a product that advertises more features and doesn't deliver them. Even in the case of the CO2 sensor, it may be your opinion that a CO2 sensor is critical and can't be done without, but the review is for the feature set at the price point. A device that does fewer things can be a better device depending on price point and, again, reliability.
If, however, I concede the author's idea that reviews must have objective criteria, methodology and standards in order to be taken seriously then I'd like to propose the first objective criterion: broken out-of-the-box === not recommended.
edit: evidently the device failed after a few months. This doesn't change my final opinion, which is in total agreement with the review, but it deserves to be mentioned because I was incorrect in my facts. For my fellow JS devs, I'm standing by broken out-of-the-box === not recommended, and adding broken within a few months of installing == not recommended.
That points to a lack of QA on your part and, I think, it is fair for a reviewer to point out.
Even if you have an exemplary warranty process and easy instructions, that's still a hassle. Not everyone has the confidence or the time to repair simple things.
As for the objective/subjective nature of reviews. Are your customers buying air monitors for their 100% precision or for "entertainment" purposes / lifestyle factors?
I have a cheap Awair air monitor. I have no idea if it is accurate - but it charges by USB-C and has an inconspicuous display. That's what I wanted it for.
It is perfectly fair for a reviewer to point out their personal preferences on something like this. They aren't a government testing lab.
A few thoughts from me on the discussion so far, which I find incredibly insightful. Many thanks to everyone sharing their perspectives. I truly appreciate it.
On subjective reviews:
I think there's absolutely nothing wrong with reviews based primarily on an author's subjective opinion. However, such reviews should be labeled appropriately: for example, "My Favorite Air Quality Monitors" rather than "The Best Indoor Air Quality Monitors". The latter title sets the reader's expectation of an objective evaluation with a consistent methodology.
On the defective display:
Important clarification: we did not ship a broken device. The display issue developed during the review period—this wasn't a QC failure on our part. Hardware can fail during use (as it can with any electronic device), which is exactly why we immediately offered replacement parts, a new unit, and detailed repair instructions when we learned about it.
On the tiny display and lessons learned:
We're well aware that opinions on our display vary significantly, as evidenced by this discussion. Some users love it, others find it too small. We actually have differing opinions within the AirGradient team as well.
We're planning a refresh of our indoor monitor next year and are currently testing around 10 different display types—including e-ink, colour OLED, touchscreens, and others. So far, we haven't found the ideal replacement, but we're planning to involve our community later this year to gather feedback on the various options.
Super smart move. I hadn't heard of you folks before, but I'm interested in your product - open source and repairability are high on my list for home monitors. I'm lying in bed awake right now due to an air quality issue, so it's top of mind.
The only thing you're missing for me is radon detection. I just bought a house and the tests came in below remediation levels, but the report showed a lot of spikes and variance. Do you have any plans for a model with radon detection in the future?
I think your concerns are legit, but it's not necessarily the reviewer's fault that they're on three deadlines and don't have the time to give your product the care and concern it deserves. It's probably the editor's or the publisher's.
I'm a world class writer but I stopped doing it for a living a long time ago. Why? Because as media moved from print to online, the work was devalued. I've worked for 25 cents a word sometimes, which was pretty decent when one 1200 word piece could pay rent back then. Nowadays, writers are offered $25 per article flat with no compensation for rewrites. Staff positions pay badly for too much work but are as coveted as C suite gigs are in the tech world. Maybe more so.
So if the reviewer is staff, they might be assigned three or four reviews in a given week on top of other work. If they're freelance, they might have to take on more just to make their rent. This is because your average magazine staffer who's not management pulls about as much as a Starbucks manager, and was ever thus, unless you got in at Vanity Fair or The Atlantic back in the Before Times.
It's like when I was reviewing albums for $50 a pop: I'd get a stack of them to review and cue up track one and if I didn't get hooked pretty quick, I'd just pop in the next one.
Your device arrived damaged, which is absolutely no one's fault, but your reviewer doesn't have the time or, honestly, the impetus to give it a second chance. Not for whatever they're getting paid for that review, which is not much at all.
It's just bad luck, is all. And yes, it's not fair and, yes, you're right to complain, but it's not as simple as "tech writer lazy".
(And if anyone's response is "They accepted the job, they should do their best at it no matter how little it pays", I'm guessing you've never had to duck your landlord to try not to get evicted before the freelance check you've been hunting up for three weeks arrives. There's a reason I'd rather make a living as a mediocre coder than a very good writer these days - at its worst, the tech industry is more remunerative and stable than the publishing industry is.)
"is that this review is ... pretty much purely based on the personal preferences of the author."
You've found the core takeaway about nearly all "product reviews" in nearly all publications. They are almost all simply "the personal preferences of the author".
These authors have neither the time nor the science skills for anything even beginning to look like a rigorous scientific review, so the "best" vs. "ok" vs. "not recommended" tags result from the author liking the particular shade of pink used on a trim piece on one, or liking that another one looks like the Apple computer they are using, and so forth.
But they are never based upon any objective criteria, and are never (nor ever were intended to be) reproducible in any scientific fashion.
Yet, as you say, they have "great power" to influence buying decisions on the part of folks who read their reviews.
First, the thing I'm not really seeing mentioned anywhere here in the HN comments is that a separate AirGradient sensor was #3 on the list of "recommended" sensors and was specifically called "Best Budget Quality Air Monitor". I can't seem to find this mentioned in the piece that you wrote, either. Why not highlight that success?
You write:
>How can a product be penalized for a failing display when another recommended product has no display?
This is an incredibly perplexing take. A display is subjective - whether or not the customer wants one is up to the customer. What the customer does want is a functional product, so regardless of what another product's features are, if that product functions as intended and yours does not, of course it's going to be recommended over yours.
>How can an indoor monitor without CO2 sensing - essential for understanding indoor air quality - be recommended over one that includes this crucial measurement?
Again - the products without CO2 sensors functioned as intended. It is indeed accurate that CO2 is one of the most critical metrics for assessing indoor air quality, but it goes back to my previous comment - perhaps the customer is more interested in PM2.5 indoors than CO2 for a specific reason. We don't know. Ultimately, the CO2-less sensors functioned as intended, whereas yours did not.
You go on to say:
>And specifically for situations like this: How would you want us to handle it? Should companies stay quiet when review methodology breaks down? Should we be more aggressive in calling this out? Or is transparency and open discussion the right approach?
Maybe focus less on one review and more on improving the product? As another comment states, you shipped a broken product and it suggests that there's a problem with your QA process. Further, you state early on:
>Let me be clear: this was a legitimate hardware failure, and we take full responsibility for it. As soon as we learned about the issue, we immediately sent replacement parts and a new unit, including repair instructions, as repairability is one of our core differentiators.
Let's maybe hear more about that. How/why did the hardware fail? Did you examine your QA process and make any improvements to it? Highlight these steps, as well as the "core differentiator" that is your repairability, rather than asking perplexing questions about why one reviewer didn't like your product.
As an "average Joe" customer in this area, the whole response feels excessive and... whiny (driven by the fact that you don't highlight that you did, in fact, have a product on the list that was well recommended). I don't say that to be terribly mean, it's just a bit off-putting. You're not necessarily wrong about product reviews like this in general, but like... who cares? Put the effort into making a solid product, not taking umbrage with one person's opinion.
There will be more reviews, and some of them will be negative. You're not going to be able to control perception and opinion, and nobody will ever get perfect marks from everyone. Learn to be OK with that.
Edit: I just saw your response about this not being a hardware failure when shipped. Still, the general concept of my point remains - detail what you're doing to determine how this happened and prevent it in the future, rather than griping about the review process. If "transparency is how [you] operate", lemme hear the deets about this issue!
I have your product. I like your product. I like your company. Fundamentally, I can't disagree with the review. You got unlucky, and the reviewer was looking for a different product. It happens.
It's not a universal product. Competitors have upsides and downsides, and different people want different things.
I think this is the time to move on, and to focus on the people who like and appreciate you rather than dwell on those who don't. Success brings more of both, and if you can't handle a few haters, you probably don't want to be too successful.
And reviews are imperfect, but a lot better than no reviews. Accept life isn't perfect.
Thanks for a great product and for running a company with integrity.
There seems to be a trend of companies attacking reviewers and claiming their feelings got hurt when they dislike a review: the Nothing CEO reacting to reviews on YouTube, Rabbit R1, the Humane AI Pin, Fisker cars, and now AirGradient, I guess.
To anyone else reading: I would recommend AirGradient; it's the only real contender that checks all the boxes for an air monitor. Love the way they spun this narrative.
Wired sold out to Condé Nast long ago. They're the tired ones.
This sounds like something Louis Rossmann should cover as a counter-example of mfgrs trying to do the right thing but fickle, corporate reviewers behaving in a petty, unfair manner.
The reviewer bought the product and received a broken unit. Why is it unfair to write about their actual experience with the product? Sure, warranty exists but none of the other products being tested needed a warranty cycle.
AirGradient (and several commenters here) feel like they're trying to spin their own QC problems as an indictment of modern journalism.
I actually tried to reach out to Louis Rossmann a few times but haven’t got a response (yet).
I think what’s most interesting is that we figured out a business model based on open source hardware that’s sustainable - a win-win for the manufacturer and the customer.
Repairability was actually a feature we designed into the product from the start.
My immediate reaction when landing on one of these "top 10 X products" lists is that the list is sorted according to the size of the kickback the author gets when users click on affiliate links. For the most part, I don't believe there is any such thing as a legitimate product review on the internet any more.
There is a vanishingly small collection of youtubers that I might still trust when it comes to product reviews, and that list is shrinking.
Not exactly on topic, but does anyone else feel that the bolded key phrases actually make it harder to read? I find my eyes jumping between them without absorbing the rest of the text.
I wouldn’t worry too much, tbh, if I were AirGradient. I don’t think anyone trusts Wired for serious tech reviews, and the target audience would veer towards the plug-and-play crowd anyway.
My airgradient monitor has been online for years and sending data to Prometheus reliably. I’ve been able to plot the air quality across a few climate events and the introduction of a Samsung air filter in my bedroom. It’s a good little product.
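For the curious, a setup like that doesn't take much. Below is a minimal sketch of a poller that reads the monitor's local HTTP API and re-exposes the values for Prometheus to scrape. The endpoint path (/measures/current) and the JSON field names (rco2, pm02, atmp, rhum) are my assumptions about the local API - verify them against your firmware before relying on this:

    # Minimal sketch: poll an AirGradient monitor's local API and expose the
    # readings as Prometheus gauges. Endpoint path and JSON field names are
    # assumptions -- check your firmware's local server API docs.
    import time

    import requests
    from prometheus_client import Gauge, start_http_server

    MONITOR_URL = "http://airgradient.local/measures/current"  # hypothetical hostname

    co2 = Gauge("airgradient_co2_ppm", "CO2 concentration (ppm)")
    pm25 = Gauge("airgradient_pm25_ugm3", "PM2.5 concentration (ug/m^3)")
    temp = Gauge("airgradient_temp_celsius", "Temperature (C)")
    rhum = Gauge("airgradient_humidity_percent", "Relative humidity (%)")

    def scrape_once() -> None:
        data = requests.get(MONITOR_URL, timeout=5).json()
        co2.set(data["rco2"])
        pm25.set(data["pm02"])
        temp.set(data["atmp"])
        rhum.set(data["rhum"])

    if __name__ == "__main__":
        start_http_server(9100)  # Prometheus scrapes this exporter on :9100
        while True:
            scrape_once()
            time.sleep(30)  # refresh every 30 seconds

Point an ordinary Prometheus scrape job at :9100 and the trend plots fall out of standard PromQL/Grafana work.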
I own several AirGradient monitors and have used other brands in the past. As far as I am concerned AirGradient is clearly superior, not only for ease of use, repairability and their open source approach, but also because of their tremendous enthusiasm for getting accurate data and being totally transparent about the strengths and weaknesses of the technology.
I have one, and I generally agree that the screen sucks; it could really use an upgrade.
Also it’s a pain in the a to zero out the CO2 sensor the first time.
I would probably not recommend it to someone who doesn’t like to dabble a lot with tech. It’s not really an “it just works”, easy-for-everyone product.
Rotten luck, but at the same time, reviewers review the device in front of them. They can’t really throw that experience out on the basis of some assumption that it’s not representative.
I have one of these AirGradient indoor units. I also have a dedicated RadonEye Bluetooth device and an Airthings Wave Pro.
The OLED display is nice, but I rarely care what the exact metrics are in realtime. I have them stored as time-series stats so I can see trends over time - exactly like I do for metrics of production systems in SRE life.
The unit also has a series of LEDs across the top and I can read the actual status from 20’ away (which is as far as I can get without going out a window or around a corner).
One green LED? Good.
Two green LEDs? Meh.
Three LEDs? They’re red now, and that’s not great.
A single red LED in the top left in addition to any on the right? Spectrum is having yet another outage (no internet).
No LEDs? Not powered on.
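As a rough illustration of that at-a-glance scheme, the mapping is just a couple of threshold comparisons. To be clear, this is a made-up sketch, not AirGradient's actual firmware logic, and the cutoffs are invented:

    # Hypothetical sketch of the LED-bar idea described above; the real
    # firmware logic and thresholds are AirGradient's, not these.
    def led_status(co2_ppm: float, pm25_ugm3: float) -> str:
        """Map readings to a coarse status readable from across a room."""
        if co2_ppm < 800 and pm25_ugm3 < 12:    # invented "good" cutoffs
            return "1 green LED"
        if co2_ppm < 1200 and pm25_ugm3 < 35:   # invented "meh" cutoffs
            return "2 green LEDs"
        return "3 red LEDs"                     # anything worse

    print(led_status(650, 5))    # -> 1 green LED
    print(led_status(1500, 40))  # -> 3 red LEDs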
Reviewer was overly severe and did his readers a disservice.
It’s better imho than my Airthings Wave Pro, and it lets me get awesome, actionable time series data. It’s sensitive enough to show air quality taking a dive overnight with two people and multiple pets breathing in the same room (one green during the day, three or four red at night), and also to show that adding a half dozen spider plants keeps CO2 well in check (consistent one green LED).
And I can read the air quality from across the room without getting out of bed.
Yeah, I accidentally gamified air quality for my wife by installing this. She really wanted to see green on the LEDs. It also drove home the fact that using a gas stove hits the air quality in the house for an hour or two; that's now on the replacement list.
The fact that I can keep using this even if the vendor goes out of business was a major selling point, as was the Home Assistant integration.
I highly recommend these (I have indoor and outdoor units).
Open source performs better, but is less convenient/accessible than a polished consumer product with inferior technical chops?
Even ignoring the broken display, which I think is a red herring here (it would be relevant if this unit had a pattern of quality issues or failures indicating systematic production issues), I think that's the story here.
I appreciate the response from AirGradient, assuming it's all true.
The product with fewer features being recommended makes sense if it does those things well. The customer buying it is aware of the limitation, whereas the customer buying your unit and getting a broken display is disappointed. Maybe it was an unfair review, but this comparison is lame.
Shipping them a product with a broken display implies shoddy manufacturing and crap quality control. Any other customer could end up buying this substandard product too, and being disappointed by the lack of workmanship.
Why not take responsibility for that instead of complaining about an honest review?
The review clearly states that the screen failed after a few months. How exactly do you expect them to QC an issue like that? If the part arrives at your factory in full working order, you don't typically spend much time trying to make it fail, unless you discover a latent issue that affects a large enough proportion of those parts that the cost of screening becomes worth it.
Every volume product has failures, and a single data point is not enough to say anything about a product's quality (good or bad). At the same time, a failure during a review is absolutely something that should be mentioned, and it gives an opportunity to test the company's RMA process. At the very least, a failure like that should cause someone to look into how many others online have similar issues.
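To put a rough number on that: even a healthy product line will show occasional failures in a small sample, so one broken review unit carries little signal. A quick back-of-the-envelope, where the 3% per-unit failure rate is an arbitrary illustrative assumption, not a claim about any vendor:

    # Chance of seeing at least one failure among n independent units when
    # each fails during the review window with probability p.
    # p = 0.03 is an arbitrary illustrative number, not a measured rate.
    p = 0.03
    for n in (1, 5, 10, 20):
        at_least_one = 1 - (1 - p) ** n
        print(f"{n:>2} units: P(>=1 failure) = {at_least_one:.1%}")
    # 1 unit: 3.0%, 10 units: ~26%, 20 units: ~46% -- a single broken sample
    # is weak evidence about overall quality, in either direction.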
> Shipping them a product with a broken display implies shoddy manufacturing and crap quality control.
Eh, what? I've received products that needed warranty repair/replacement from Apple, Toyota, Philips, Nintendo, Ikea, Breville, etc. (All of them provided good service in repairing/replacing the product in question.)
A single data point of a broken product doesn't tell you anything of value.
I would much prefer AirGradient over the IQAir AirVision Pro, despite WIRED’s recent evaluation favouring the latter. Source: I “own” (see below) an IQAir AirVision Pro, which I bought directly from IQAir’s website, and I have seen an AirGradient unit in someone else’s home.
One reason is, in fact, the screen. The AirVision Pro’s display is bright, but it is bright all of the time; it cannot be made dim enough to be suitable for use in the bedroom, for example: the blueish-white LCD is basically a small light fixture. Furthermore, the contents of the screen are readable only from a narrow angle (think looking straight at it; putting it on top of a tall fridge or down on your windowsill makes it illegible). I would much prefer an e-ink display.
Second, on their website IQAir states that their air monitors are made in Europe[0]. This is a false claim. In fact, the AirVision Pro is made in the PRC, as declared on the box. I would not be against a good product made in the PRC, and the AirVision Pro is in fact known for good characteristics regarding accuracy, but it seems like a dark pattern at best, and they clearly want to mislead customers.
Third, the enclosure featured a charging USB port (which is an obsolete micro USB variety incredibly hard to find cables for) that was very finicky and gave up the ghost 3 months in. The device just wouldn’t charge its battery or see any power at all thereafter, so it basically became a brick of cheap plastic for all intents and purposes. I can’t be bothered to disassemble the enclosure and try to repair it since I can’t stand the bright screen anyway and I already got the hang of air quality patterns where I live.
It did the job, sure, but if AirGradient’s PM2.5 and carbon dioxide detectors do nearly as good of a job[1], that makes it a much more compelling option for me.
Unfortunately, as of the time I last checked, AirGradient shipped to a small set of countries which did not include my area; by comparison, IQAir has a much wider coverage.
[0] You can still see the proud large “Swiss made” in the relevant section of their website (https://www.iqair.com/products/air-quality-monitors). Furthermore, if you Google the question, the LLM-generated answer suggests that
> The IQAir AirVisual Pro air quality monitor is Swiss-designed and manufactured. While IQAir is headquartered in Switzerland, their manufacturing facilities for air purifiers, including the AirVisual Pro, are located in both Switzerland and Southern Germany
which is not true.
[1] Perhaps someone can comment on that; I don’t see SenseAir sensors listed on https://www.aqmd.gov/aq-spec/evaluations/summary-table.
It's linked in the article but here it is. https://www.wired.com/gallery/best-indoor-air-quality-monito...
> - Our monitor: Downgraded due to a faulty display (a warranty-covered hardware issue).
> - Another Monitor: Recommended, despite having no display at all.
> - Another Monitor: Also recommended, despite lacking a CO2 sensor—one of the most critical metrics for assessing indoor air quality and ventilation.
I spent quite a long time writing this post, and it actually helped me to see the bigger picture: how much can we actually trust tech reviews?
I'm already getting very interesting results from the survey I posted and am planning to write a follow-up post.
Isn't this an outdoor one? Outdoor ones aren't expected to have a display, because you want to check them without going outdoors.
This seems reasonable.
Which sites and publications would you recommend, and which are biased for financial reasons?
I don't even have a preference. They both seem nearly ideal, depending on context.
Wired vs. tired is literally about what’s “cool.” That’s it. It has never been rigorous about anything.
They don't seem to be as interested in the fact that their outdoor monitor was the recommended outdoor solution, either.
That's the idea. The caveats they don't want you to remember are left unbolded.
A more professional response would just stick to the facts rather than trash the reviewer and pontificate on what is wrong with journalism today.
Given the AirGradient is $100 cheaper than the winning product, I think the review might have been a little harsh.
Is there a "meta review" site like metacritic, but for products?