dippersauce
|
4 years ago
|
on: Linux kernel heap buffer overflow in fs_context.c since version 5.1
Almost every developer I have worked with considers NT and XNU to be hybrid kernels at this point. Unfortunately, almost everything in the parent comment is incorrect.
dippersauce
|
4 years ago
|
on: Linux kernel heap buffer overflow in fs_context.c since version 5.1
>Mach, the microkernel, is not using in OS X and never has been.
The version of Mach present in XNU is derived from OSFMK, which in turn derives code from the University of Utah's Mach 4 kernel and from CMU's Mach 3 kernel. It contains improvements related to threading and contexts. It also definitely exists as more than an API: the code that makes up Mach is present and identifiable, and thus is "in" XNU. [1]
>Mach, the API, is used (mostly due to legacy reasons).
See above. Abstractions are exposed, and traps are present, but this is not an API clone; it is Mach, albeit modified from its original form. It was specifically chosen for its forward-thinking benefits. Mach was not the basis for any major Apple OS prior to OS X, so there were no "legacy reasons" to motivate its adoption.
>OS X has a Linux-style monolithic kernel (with legacy Mach APIs in userspace).
You do tend to see different opinions on this. Some argue macOS is a hybrid kernel because it combines elements of a microkernel (Mach), a monolithic kernel (BSD), and IOKit. All of those components share a single address space, though, which leads many to call it monolithic. [2]
1. https://github.com/apple/darwin-xnu/tree/main/osfmk
2. https://flylib.com/books/en/3.126.1.67/1/
dippersauce
|
4 years ago
|
on: Intel is reducing server chip pricing in attempt to stem the AMD tide
It's tempting to blame the user hostility on corporate shortsightedness or dysfunction, but I wonder if MS has a long-term plan here?
dippersauce
|
4 years ago
|
on: Intel is reducing server chip pricing in attempt to stem the AMD tide
I vaguely remember seeing a video demonstrating an M1 device virtualizing Windows on ARM faster than it ran on Surface ARM hardware. Kind of reminds me of how an Amiga of the era could be set up to virtualize(?) Mac OS faster than a contemporary hardware Mac could run it.
dippersauce
|
4 years ago
|
on: Intel is reducing server chip pricing in attempt to stem the AMD tide
Before M1, my only exposure to ARM had been low-power SBCs and Android devices, and the experience was mediocre in the “just works” department: poor hardware support, a lack of proprietary software support, and lacking performance. Apple’s tight integration and high-end CPUs have resulted in a vastly better experience, but I want more options than just macOS and MacBooks. I think we’re trending in the right direction, but it’s going to be a while (5 years, IMO) before we see anything competitive with the M-series chips from the major market players. If Microsoft could fix their frankly horrid x86 compatibility on aarch64 devices, things would speed along nicely, I think.
dippersauce
|
4 years ago
|
on: The sexual counterrevolution is coming
This reads much like the musings of some of the religious fundamentalists I grew up around in my town. They purport that their way is the mainstream or "coming/silent majority", then shame anyone who falls outside their sexist and dehumanizing beliefs. They seek to make those people feel small and outcast, thereby confirming their false beliefs in their own eyes. It's sad that someone could write something as sexist as she has and not have even a shred of self-awareness about the irony of it.
dippersauce
|
5 years ago
|
on: The Apple M1, ARM/x86 Linux Virtualization, and Boinc
The unified architecture, the speed of the modules (4266 MT/s LPDDR4X), and their close physical proximity to the CPU cores likely all contribute to the M1's memory performance.
In Apple's own words:
“M1 also features our unified memory architecture, or UMA. M1 unifies its high‑bandwidth, low‑latency memory into a single pool within a custom package. As a result, all of the technologies in the SoC can access the same data without copying it between multiple pools of memory. This dramatically improves performance and power efficiency.” [0]
[0] https://www.apple.com/mac/m1/
dippersauce
|
5 years ago
|
on: ACE: Apple Type-C Port Controller Secrets
Part of me thinks they may intend to get rid of the physical ports altogether on the iPhone. This would serve to differentiate the "productive" iPad from iPhone, and allow them to scratch that minimalist itch they get every once in a while.
dippersauce
|
5 years ago
|
on: The M1-based MacBooks don’t even blink when the display configuration changes
I imagine the seamless transition between display configs has less to do with the performance of M1, and more to do with the difference between how it and x86 systems implement and handle graphics. I'm definitely not knowledgeable on the subject and I've also only ever used Mac computers, so I'm not sure if this is something that most or very few PCs do when changing display configurations.
dippersauce
|
5 years ago
|
on: System76 – Pangolin
It is listed right on the product page: 1 × USB 3.2 Gen 2 Type-C.
dippersauce
|
5 years ago
|
on: Apple MBPr 13 M1 cinebench R23 score
Aftermarket racked Mac Minis aren't a new idea in the server space. Have you considered creating a cluster of them to fit into a rack? As far as price/performance goes, the Mac Mini is about as good as you can get from Apple.
dippersauce
|
5 years ago
|
on: Apple MBPr 13 M1 cinebench R23 score
I really want to see how these chips perform graphically in real-life games. It seems incredible how performant they are, so much so that I almost can't believe it.
dippersauce
|
5 years ago
|
on: Apple MBPr 13 M1 cinebench R23 score
I too had hoped that Apple might reduce the price of the new AS lineup to promote adoption by consumers, but it doesn't seem to be going that way. Depending on how these initial models sell (well, I think), we may or may not see any improvement in the price/performance ratio.
dippersauce
|
5 years ago
|
on: Microsoft submits Linux kernel patches to enable complete Hyper-V on Linux
I cannot speak for the other poster, but there are several levels where consistency is significantly better. The GUI is of course the most noticeable, especially as the “iOS-ification” of macOS continues. But for a developer, the methods you interact with are more consistent across platforms and apps. Porting an app between iOS and macOS can be as simple as changing a few method names and setting a new target in Xcode. For the most part one can assume things like app bundle layouts and where files will be dropped on the system. Most of this consistency exists precisely where Apple enforces it, which comes with its own downsides.
That consistency isn’t absolute though, rough spots like the boundary between Mach and the BSD components still exist.
dippersauce
|
5 years ago
|
on: Unusual Features of the SARS-CoV-2 Genome
It doesn’t have to change your point to severely reduce a reader’s confidence in it. A kind reader would say that difference in detail is hyperbole with the intent to incite an emotional response. A less kind reader would call making up details to better suit your narrative a form of lying. Yes they covered up the train incident, and yes it was despicable and horrible they did so. That aspect is not my problem, it’s the deliberate choice of words to foster not discussion, but outrage. It makes me question your entire narrative and intent before I’ve even had a chance to consider it.
dippersauce
|
5 years ago
|
on: SoftBank set to sell UK’s Arm Holdings to Nvidia for $40B
It simply wouldn’t make sense for Nvidia to maliciously impose upon ARM after the acquisition, as many commenters fear. I’m not saying they won’t try, but the present situation would limit any such efforts long enough to make them futile.
Consider the perpetual license agreements ARM holds with companies like Apple. These companies are best positioned to resist meddling with ARM. Apple, for example, will forever have access to the ARM ISA, so Nvidia can’t simply stop them from using existing designs; the processors Apple uses are all custom designs anyway. If future designs were purposely kneecapped, they could just improve their currently licensed designs until a suitable alternative is produced. Hindering future processor designs won’t hurt the biggest players in the short term, and in the long term it will only drive them to the competition.
Nvidia could take the approach of slowly drifting new designs toward greater integration with their own GPUs, in such a way that alternatives would be displaced, either by favoring Nvidia GPUs or through the difficulty of using anything else. This would be obvious though, and would again drive their users away.
Nvidia hasn’t had success in any other market like they have had with GPUs. I see this as an opportunity for them to diversify and secure their future, and they want to take it.
dippersauce
|
5 years ago
|
on: SoftBank set to sell UK’s Arm Holdings to Nvidia for $40B
Looking through the doomsayers in this thread, this is the result I believe to be most likely. If ARM goes belly up through gross mismanagement, then the mantle will be taken up by someone else. Another aspect to consider is the perpetual license agreements ARM holds with several other businesses; I think those may muzzle Nvidia to some extent.
dippersauce
|
6 years ago
|
on: Factorization of RSA-250
I would think a primary reason for that is performance. A key that large would require a lot of entropy for initial generation and a large(r) amount of memory, and it would make encryption much more computationally expensive. I'd also be worried that with such large keys there might be greater potential for side-channel attacks.
dippersauce
|
6 years ago
|
on: New MacBook Air
I am so glad it doesn't have a Touch Bar. Nothing turned me away from the newer Pro models quite like the lack of physical keys. Sure the Touch Bar is nifty for timeline scrubbing and slider adjustment, but those are such small aspects when compared to how often the function keys are used.
dippersauce
|
6 years ago
|
on: New iPad Pro with LiDAR Scanner and trackpad support
Funny how Apple went from a mindset that mice were only intended to be accessibility peripherals on iOS to fully supporting them so quickly. I think it was only a year or so ago that someone high up at Apple said something to reaffirm that mindset.