item 30449263

The time has come to replace file systems

177 points | pabs3 | 4 years ago | didgets.substack.com

295 comments

[+] memetomancer|4 years ago|reply
Reading this article left me wondering just where the 200 million tags this guy needs are supposed to come from. Manual curation?! Automatically derived from file extensions? File headers? What is the cost of opening a file, parsing its filetype, comparing against a reference, writing it to a database, etc.? How is that cheaper than current indexers (which all seem to work fine, btw)?

I rarely waste effort trying to remember filenames in the first place, much less needing some expensive tag curation to locate files. I simply use a bit of discipline organizing the directory structure(s). If I do ever need to actually search for something, it will be constrained to a narrow subset of directories and ignore the other 199.9 million files or whatever.

Moreover, I just don't have the problem of searching for filename fragments to begin with. Nor do I see a reasonable way to use a whole host of powerful unix techniques with a whackadoodle tiny tags filesystem. Or the need to produce a list of 20 million images in 2 seconds. What use would that be anyway? I'm not going to read a list like that - I'm going to operate on it.

Please correct me if I'm wrong, but the versatility of `find` is far more powerful if you actually need to handle/sort through that many files, and something like `fzf` probably curtails all these complaints in the first place.

[+] brigandish|4 years ago|reply
If I had a penny for every time someone on HN responds with something like this - "just become more disciplined and you don't need X" - I'd be a millionaire. Doesn't matter what it is, type systems, memory safety, a better UI for Git… there's always someone ready to chime in with how their workflow means these problems don't happen, or, even better, asking the question why would anyone need this?

Yes, why would anyone need better search or a faster, easier to organise file system? I can't think why.

[+] vicda|4 years ago|reply
Google Photos' style of AI-driven curation, maybe?

I like the idea of having a queryable filesystem, but I wouldn't want that as a complete replacement of the directory structure.

[+] bombcar|4 years ago|reply
It sounds somewhat like “gmail for files”, which is … problematic: email search only works well enough because it’s relatively rarely done.

I suspect a system like this would work, but the tags would eventually be used by many as a way to badly implement a hierarchy.

[+] prepend|4 years ago|reply
My response to these types of proposals is “just imagine that folders are tags and each level of hierarchy is a tag, symlink for multiple tags.”

It’s funny because the author just proposed a different organizing hierarchy, one I think is worse due to its novelty and minimal benefit.

I think Apple has a decent approach: Spotlight indexes very well (I just hit command+space and type the first letter or two instead of navigating Finder), and they support tagging files.

[+] didgetmaster|4 years ago|reply
When importing files into Didgets, the program automatically gathers information from the source file system and attaches specific tags to each file. For example, the file name is attached as a 'name' tag. Each folder name in its path is attached as a 'folder' tag. The file extension is attached as an 'extension' tag. In addition, a SHA1 hash is created from the data stream and attached as a tag. You can also import files by dropping them (or a folder) onto a 'drop zone' on the create tab in the GUI. Any tags attached to that drop zone are automatically attached to any file dropped on it. So dropping 100 photos on the 'My Wedding' drop zone might attach the tags 'Event = Wedding' and 'Year = 2022' to every photo. A search for files that have the tag 'Folder = Microsoft' would find every file that had 'Microsoft' as a folder anywhere in its path.
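None of this is Didgets' actual code, but the automatic tag derivation described above (a 'name' tag, one 'folder' tag per path component, an 'extension' tag, and a SHA1 of the data stream) is straightforward to sketch:

```python
import hashlib
from pathlib import Path

def derive_tags(path):
    """Derive automatic tags from a file, roughly as described above:
    name, per-folder tags, extension, and a SHA1 of the contents."""
    p = Path(path)
    tags = {
        "name": p.name,
        "extension": p.suffix.lstrip("."),
        # one 'folder' tag per directory component (drive/root anchor dropped)
        "folder": [part for part in p.parent.parts if part != p.anchor],
    }
    sha1 = hashlib.sha1()
    with open(p, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            sha1.update(chunk)
    tags["sha1"] = sha1.hexdigest()
    return tags
```

With the 'folder' tags stored as a list, a query like 'Folder = Microsoft' reduces to a membership test on that list, which matches the "anywhere in its path" behavior described.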
[+] everforward|4 years ago|reply
> it will be constrained to a narrow subset of directories and ignore the other 199.9 million files or whatever.

I think this is a vastly underrated point. I am usually not interested in searching the majority of files on my filesystem. I can't remember the last time I needed to search through system files for normal computer use reasons.

I also think the author completely skips over how to handle related files. If my application needs to load a library, how does it find the file to use? If it's by name, how are name clashes handled? I suppose it could be by tag, with built-in tags, but then you won't be able to change the tags without having to change configs or the binary itself.

[+] friendzis|4 years ago|reply
The core problem with keeping files organized is that unless you are dealing with a stream of effectively pre-tagged files, the tagging/categorizing/grouping scheme only emerges after a sufficient number of files have arrived. Organizing is therefore necessarily retroactive.
[+] bborud|4 years ago|reply
What this boils down to is that he thinks a flat namespace (tags) offers advantages over hierarchical namespaces (tree). They really don't. Once your tag space grows you will start to struggle with naming, and path-like structures (nested namespaces) start to creep back in. And you are right where you started: paths.

The treatment of immutability is too superficial to make any sense of, so I don't know what the author is imagining. Ted Nelson has evolved some ideas on this for decades that might be worth knowing about. Some of them have kind of come to pass (if you squint and look at how non-destructive editing tools for video and audio work, for instance). However, very little of Ted's thinking has ever been burdened by a usable implementation.

The concept of having multiple references to the same file already exists. So what he proposes can be realized with existing file systems just by introducing a different naming scheme and making extensive use of sym-/hard-linking.

Yes, a lot of file systems will have terrible lookup and traversal performance, but that problem lives in an orthogonal universe and can be solved. Indeed, it is solved in some filesystems, if the marketing blurb doesn't lie.

If you think about how you would realize this using existing filesystems, by organizing them differently, the concept isn't as sexy anymore. Because it doesn't really involve a lot of new stuff and you start to see the inconvenience of having to cope with both novelty and problems you didn't have before.

The problems someone like me wants solved in filesystems are entirely different, and aren't so much about filesystems as they are about how you make the functionality useful to applications.

For instance, there are filesystems that offer snapshot semantics, including COW snapshots. This would be useful whenever applications need to be able to roll back changes, switch between states, do backups while live, etc. Yet I know of no language that has snapshots as part of its standard OS interface. So people generally don't write applications that take full advantage of what the underlying system offers.

[+] horsawlarway|4 years ago|reply
Path based file systems take advantage of natural semantics we use for navigation. There is a wonderful overlap between how you navigate the real world, and how you navigate a hierarchical file system.

I have never (never ever ever) seen a tag based system actually work once you have large amounts of files and tags - Tags are manual, often duplicated with slight name changes or variations, hard to discover, and literally worse than a folder hierarchy for discoverability in almost every way.

Tags can be nice to have - but only if I also have a path. Otherwise they are utterly inferior.

[+] fluidcruft|4 years ago|reply
I think one of the problems is that there are many datasets where objects belong to multiple hierarchies, and different hierarchies are more efficient for different tasks. For example, I work with medical imaging data. Typically that gets organized around the DICOM object models (which even define multiple structures), usually the patient/encounter model or slight variations of it, and the data is stored in a database called a PACS. But working with a PACS is extremely difficult because DICOM is optimized for clinical use cases.

There are other ways of organizing the data that are more efficient for other tasks, for example quality improvement, quality assurance, or process monitoring. In fact, different users of the data are likely to want views based on different hierarchies, and some software expects certain data layouts. There are efforts to standardize file hierarchies and naming for certain tasks, but perhaps you're not doing that task. You can do things like symbolic links, but trees of symbolic links end up being super fragile in my experience, and they're not particularly well supported on some operating systems.
[+] TheOtherHobbes|4 years ago|reply
I'm not sure any of this really addresses the question - which is how do people really use files.

IME I have a number of live projects which can contain various numbers of source files, images, web links, PDFs and other documents, text files, and so on.

Then there are a number of files I access regularly which may not be associated with a project (like favourite music).

Then there's a mountain of data which is just there in case I ever need it. It includes backups of old projects, documents, music and art I keep because I think it's interesting but haven't read yet, web links that are filed and then (sadly...) forgotten, and so on.

I don't know how typical this is, and it doesn't matter. Because neither a tag based nor a tree based system address the real issue - which is designing a custom file workflow that collects related references of all kinds, doesn't confuse working data with long-term storage, allows off-site backups, allows collaboration, supports versioning on demand, and also makes it easy to find things.

I suppose all of that means some kind of process API which does a lot more than file.open() and file.close().

It could be built on tags, it could be built on trees, it could be built on some combination. Or on something else entirely.

The implementation matters a lot less than a set of available features which streamline common tasks in some fairly standardised and effective way.

[+] cptskippy|4 years ago|reply
> The concept of having multiple references to the same file already exists.

It does and it's really bad IMO. The author's suggestion of unique identifiers though would introduce all sorts of new problems, primarily it would make the transparency problems of existing systems worse.

Most applications rely on the location of a file, relative or otherwise to load data (e.g. configuration). That reliance is exploited by software engineers to implement configuration swaps, event processing, and many other features. Referencing files based on UIDs, or a series of tags that aren't guaranteed to be unique or not known to be off limits to regular users, would introduce all manner of complications.

I could also see it being terribly easy to introduce bugs loading files using filtered tags. Would applications need to have relative tags to mitigate these problems? Having unique paths works both as a filter for the user and an encapsulation for a system that allows you to localize your concern. Without that encapsulation by default, you will be spending a lot more time and concern dealing with files and tags.

[+] 6510|4 years ago|reply
There is a lot of stuff that should NEVER appear in a mixed view. (Google is full of examples of that.)

Tag clouds and other metadata can still be very useful. The challenge is creating useful tags/metadata automatically: for example, a timestamp for every modification, or a label for every application that created, modified or loaded the file. Perhaps even the applications you were using when the file was created/modified, and the names of the files loaded into the application. Train some AI to show you files you probably want given your current activity.

[+] magicalhippo|4 years ago|reply
> And you are right where you started: paths.

A big difference is that one can naturally have multiple tags, and an entity could share tags with other entities.

Sure you can use hardlinking when it comes to files, but it's tedious and you can't have multiple files hardlinked to the same path.

[+] timetraveller26|4 years ago|reply
The problem of hierarchical file systems and data location is a really old problem that has had many implementations (I even tried building one many years ago).

Somewhat related:

Tagsistant https://news.ycombinator.com/item?id=14537650

TMSU https://news.ycombinator.com/item?id=11660492

BeOS File System https://news.ycombinator.com/item?id=17468920

TagSpaces https://news.ycombinator.com/item?id=12679597

git-annex https://news.ycombinator.com/item?id=29942796

Names should mean what, not where https://dl.acm.org/doi/10.1145/506378.506399

Unfortunately it's not easy to get a real solution, and many people don't think there is a problem at all (based on some comments in this thread).

Nowadays I use git-annex; though it has its quirks, it seems a step in the right direction.

[+] ThePhysicist|4 years ago|reply
Systems that try to get rid of the "files & folders" abstraction of a file system tend to have much worse usability, in my opinion. I have an iPad Pro, and the lack of true file system abstractions is so painful. Every app has its own way to store and retrieve data, there's almost zero interoperability and it's super painful to copy, paste and move stuff around (I know it has gotten better but it's still so much worse than on any desktop OS).

I'm all for enriching the concept of a file system with additional meta-data (in fact many files do that) but I don't think that needs to happen in the file system itself. For example, software like Picasa leveraged meta-data contained in files to provide a new way of interacting with large number of photos. The author basically proposes to put such functionality directly into the file system, but I'm really not sure if that's a good idea. Right now it's easy to move files between different systems, e.g. from Mac to Windows or Linux. If file systems become meta-data management databases that will become much more difficult.

[+] slightwinder|4 years ago|reply
> Systems that try to get rid of the "files & folders" abstraction of a file system tend to have much worse usability,

IMHO it's because those systems just simplify, but don't go very deep into the space they opened up. If you don't offer power, then it's irrelevant which system you offer; they will all suck fast.

> Every app has its own way to store and retrieve data, there's almost zero interoperability and it's super painful to copy, paste and move stuff around (I know it has gotten better but it's still so much worse than on any desktop OS).

Which is kind of a surprise; I would have thought Apple would be interested in unifying that space and offering a good user experience.

> Right now it's easy to move files between different systems, e.g. from Mac to Windows or Linux. If file systems become meta-data management databases that will become much more difficult.

Theoretically, it could be solved by using a meta-file container: something like a tar container holding one file for metadata and one for the actual content. We already have this with specialized container formats for media and office filetypes. Making a universal format that works equally well for any kind of file could solve this interoperability problem. It would even open up ways to improve files without changing them directly, like adding subtitles or notes to a file by just adding them to the container, not the file itself.
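A toy version of such a container is easy to build with a plain tar archive: the payload stays byte-for-byte untouched, and tags, subtitles, or notes live in a sidecar. All names here are made up for illustration:

```python
import io
import json
import tarfile

def pack(container_path, content, metadata):
    """Write a hypothetical meta-file container: the raw payload
    plus a metadata.json sidecar, both inside one tar archive."""
    with tarfile.open(container_path, "w") as tar:
        for name, data in (("content", content),
                           ("metadata.json", json.dumps(metadata).encode())):
            info = tarfile.TarInfo(name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))

def read_metadata(container_path):
    """Read the sidecar without touching the payload."""
    with tarfile.open(container_path) as tar:
        return json.loads(tar.extractfile("metadata.json").read())
```

Because the payload is stored unmodified, its checksum stays stable no matter how often the metadata sidecar is rewritten, which is exactly the interoperability property being argued for.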

[+] kalleboo|4 years ago|reply
macOS has so much of the prep work for this kind of thing, but Apple has completely dropped the ball on the UI.

Spotlight parses and indexes all the existing metadata in your files (music ID3 tags, photo EXIF tags, etc.; run `mdls` on a file in a terminal to see all the stuff it's extracted), and this could all be used to make some pretty powerful UIs. But all Apple has done is make one very handy universal search UI, a very poorly designed specific search UI, and stored searches (which are also useful, but limited in practicality by how bad the UI to create them is).

[+] mikewarot|4 years ago|reply
I have 394,175 photos and videos that I have personally taken since 1997. They are organized by a simple hierarchical system in 5,356 folders.

D:\masterarchive\source\YYYY\YYYYMMDD\photo file name
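A layout like that can be generated mechanically from a capture timestamp; a sketch (the source name here is hypothetical, and a real pipeline would read the date from EXIF rather than take it as an argument):

```python
from datetime import datetime
from pathlib import PurePath

def archive_path(root, source, taken, filename):
    """Build root/source/YYYY/YYYYMMDD/filename from a capture timestamp."""
    return PurePath(root, source, f"{taken:%Y}", f"{taken:%Y%m%d}", filename)
```

Since the scheme is a pure function of the timestamp, it stays consistent across decades of imports, which is a big part of why simple hierarchies like this age well.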

If I want to find a person, in a photo, I've used Google Picasa (when it was an offline product) and lately digiKam to do face matching, and tagging them with IPTC metadata tags in the photo files. Thus they survive moves across filesystems, etc.

I'm up for seeing alternatives, but there's a very high bar to clear here. People have been using directories and file storage since the middle ages.

[+] WesolyKubeczek|4 years ago|reply
Listen here, a hot take incoming.

There are two absolute genius inventions in computers so good and timeless that the sliced bread pales in comparison like a stupid troll comment on HN.

1. The keyboard

2. The hierarchical filesystem

Everything and anything else in input devices and data storage builds on these and the best solutions ever always are going to augment these, never replace them.

A good tag system will build on top of a filesystem and coexist with it, and offer value like stupidly fast search. Anything else will be lucky to survive a weekend of dubious fame on twitter, or up to a few months if you actively market it.

[+] pontifier|4 years ago|reply
I've never seen a way to organize files that feels like it prioritizes files based on how much they mean to the user.

By that I mean pictures I take, papers I write, things I really wouldn't want to lose, vs 10,000 random system files.

For downloaded files sometimes the history of when it was downloaded, and from where, is almost as important as the contents.

Backups from old computers, and old phones start to pile up, and the chaos of trying to find that picture you took 3 phones ago, or the notes you took, or the recording you made, or the pdf you downloaded, or that code you wrote, or that map you made, is a real pain.

Digital clutter is one of my biggest problems.

I really need a good way to deduplicate and organize ALL my digital stuff. Tags might play a role, but I don't think they quite solve the problem.

[+] ho_schi|4 years ago|reply
It feels like the author doesn't like tree-based file structures? The software shown in the screencast reminds me of iTunes, which I dismiss because it doesn't provide a logical *tree-like structure*. And it is not a filesystem replacement either; it is a database which adds a lot of complexity and hides the actual data. Furthermore, this assumes someone maintains the metadata (remember the MP3 taggers?) instead of the files. Metadata itself is useful, but the creator of the file should add it, not the user. Regarding file manipulation, the proven answer is file permissions, but I think cgroups are the flexible, modern approach.

Because I'm seeing "Windows Explorer" in background:

Windows Explorer has degraded in recent years; it is even hard to open your "home directory" and the UI is confusing. Look at the one from NT 4.0, which was much closer to fulfilling the task.

And Apple:

I think they now regret the whole iTunes approach? But instead they are pushing hard on apps which contain the data. Now you always have to look into a single app and use its facilities to retrieve a file. Android failed here, too. But using iOS is hard.

[+] ubermonkey|4 years ago|reply
>it is even hard to open your "home directory"

Windows clearly doesn't want people to GET to their home directory, for some reason. That seems goofy. If people don't understand that $user contains the rest of those folders (Documents, Downloads, Pictures, etc) they'll never be able to navigate on their own. That's bad.

In a sane tool that features an address bar, clicking any given directory would show, in the address bar, the path to that location. WinExp only rarely does this. If you click on, say, Desktop, it shows you This PC > Desktop, implying a relationship that is incorrect. Getting to your home folder without typing requires you to start with C: and drill down, which is objectively insane.

Even MORE bananas is that if you start at C: and drill down to Desktop, you DO get the correct path in the address bar. But if you then make a WinExp shortcut of that location, it goes back to the other behavior. WTF.

[+] olliej|4 years ago|reply
The idea of building “indexing” into the file system means either the file system directly understands all file types, ignores those it doesn’t understand (thus requiring an out-of-fs indexer), or requires the file system itself to dynamically load logic to handle different file types. By the time you get to the last one, all you’ve done is build Spotlight (or the MS equivalent) into your file system, so now you’ve got all the cost of the indexer, only now it’s in the process reading and writing the raw bits, and of course it doesn’t index the contents of any other filesystem (so you’re still going to be running an indexer).

I also don’t understand how a filesystem is going to store this data in such a meaningfully different way that it uses less space and/or is faster to index.

[+] fxtentacle|4 years ago|reply
I don't think file systems will be replaced anytime soon because of psychology. The human mind remembers things best by attaching them to a real or virtual location. That's how all the memory experts do it, they construct a virtual house in their mind. Virtual rooms, shelves, boxes, and folders are no different. So if anything, I'd give the Filesystem different folder icons based on their depth to reinforce this similarity with the real world.

Also, the article seems to use strawman arguments. Nobody needs to remember the exact image file extensions. You just click on the "search for images group" in windows and it'll search all image file extensions for you.

In effect, tags are already there. It's just that they are automatically generated.

[+] enkrs|4 years ago|reply
I like the idea, but I think it will not change the world.

Filesystems have already been reduced to storage mechanisms for systems not people.

People just don’t organize files anymore. And that’s a good thing.

Most employees in relatively fresh organizations keep their files in OneDrive and Dropbox: 10-15 folders with random names, and a good search function that returns recent files on top. The older files just lie there, not bothering anyone because nobody is looking.

Files from other departments are found via links in Mail and Slack search - not as attachments to Email.

People launch Powerpoint (online) and use the recent files menu instead of browsing from the ”C: drive”

To rethink storage ignoring that people don’t store files anymore is futile. It’s nice for organized geeks (like me), but in general file organization is a thing of the past.

[+] tremon|4 years ago|reply
> People launch Powerpoint (online) and use the recent files menu instead of browsing from the ”C: drive”

Yes, I do that too. But that's because I have to, not because I want to. Onedrive, Sharepoint (and I guess Dropbox too) are impossible to navigate otherwise, so yes, even people that understand hierarchies are forced to use an application's LRU list to find old documents.

That's not a sustainable situation. I foresee huge storage bills for organisations because they won't be able to afford to curate their growing terabytes of disorganized file storage.

[+] theamk|4 years ago|reply
The tag-based location of user documents is nice, but why do people want to put it into filesystem layer? This seems like a bad fit.

- Tags in the filesystem index too much. For example, if a program directory happens to contain a .jpeg file, it should not be shown to the user. Neither should the user see files from the browser's cache folder.

- Tags in the filesystem index too little. Filesystems are device-specific, and a lot of the time you want to index across all devices in a system. And maybe some files have no associated device at all, because they were transparently offloaded to the cloud?

I think a much better fix would be to have the index database as a separate file, with the filesystem providing general support for it. The author says that separate indexers might become out of sync or be slow -- but this is not an inherent property of indexers; rather, it is a limitation of current filesystem design. So let's make filesystems more index-friendly:

- Make it fast & easy to detect individual file changes: every file has auto-updateable change time that user cannot mess with (linux already does this). Even nicer would be an extra timestamp which updates when content changes (not metadata) -- together with inode, this can detect renames easily and quickly.

- Make it fast & easy to detect past filesystem changes: There is a way to quickly find all changes made to the disk since some past moment: Merkle hash of directory + all contents is ideal (like ZFS maintains internally), or failing that, NTFS-style change journals can work too.

- Make it fast & easy to detect present filesystem changes: have powerful notification API that can detect all changes on disk. Perhaps also include first few kilobytes written to file for performance (so that file scanners do not have to open every just-written file)?

- Make it possible to "claim" a subdirectory: something like a common attribute that advises common file browsers to avoid modifying the content. This way software can use automatically generated names, and not worry about users copying random files into arbitrary locations of a structured hierarchy. (This should be bypassable by the user with appropriate warnings -- this is a UX mechanism, not a security one.)

- Perhaps a standard for how to store tags? All modern filesystems have attribute support, but AFAIK there is no clear consensus on how exactly to store the tags.

This way, one could have general tagging system, and winamp music database, and photo management app all looking at the same data and working together.
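Until filesystems offer change journals like the ones proposed above, "detect past changes" can only be approximated by diffing stat snapshots, paying a full tree walk each time; a minimal sketch, which also catches renames by matching inode numbers:

```python
import os

def snapshot(root):
    """Map each file path under root to (inode, mtime_ns)."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[path] = (st.st_ino, st.st_mtime_ns)
    return state

def diff(old, new):
    """Classify changes between two snapshots."""
    added = {p for p in new if p not in old}
    removed = {p for p in old if p not in new}
    modified = {p for p in new if p in old and old[p] != new[p]}
    # a removed path and an added path sharing an inode is likely a rename
    renames = {(a, b) for a in removed for b in added
               if old[a][0] == new[b][0]}
    return added, removed, modified, renames
```

A filesystem-maintained Merkle hash or change journal would make the `old` snapshot (and the walk) unnecessary, which is exactly the point of the wishlist above.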

[+] syntheweave|4 years ago|reply
I think most of the friction of filesystems stems from wanting application layer features in a construct that has always been decidedly "systems" and just holds data, while applications themselves have gone the route of appending more and more features into files. Since there's no middle layer, files have increasingly become the Armstrongian "gorilla holding the jungle and a banana", often duplicating state and metadata to get the job done. And it terrifies me when development tools, as they so often do these days, spray around files, because it usually results in broken dependencies somewhere down the line.

Another approach that could get at addressing this is to define frontend protocols to filesystems that do targeted, application-y things. This is done in informal vernacular often enough through things like naming conventions, but what we could really aim for is a specification that's a "form-filler" for each category, that consumes various document and data types and produces the desired kinds of metadata.

The difference between that and doing it as an indexer is that it could be seen in a bidirectional intermediation sense: if the protocol understands all the relevant formats well enough to parse them, it doesn't have to also hold a file, it could simply use internal structures and generate the file representation on demand if needed. But to do it properly these structures would have to have similar security and integrity guarantees to our current filesystems. And exposing a frontend like this does add surface area, with the silver lining of "if it's pushed down the stack, then fewer application coders will have to roll their own terrible version of this functionality".

[+] em-bee|4 years ago|reply
> The tag-based location of user documents is nice, but why do people want to put it into filesystem layer?

i don't know if the filesystem layer is the best place, but i don't want to lose the tags when copying or moving files.

so somehow this metadata needs to be associated with the file, but, it also should not be in the binary stream of the file. EXIF in images and other similar metadata systems are nice, but any change there invalidates checksums or other attempts to identify changes in the actual file content. (i want to easily be able to see if two images are identical even if they have different metadata, which i can now only do with specialized tools)
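For JPEGs specifically, the "identical image, different metadata" comparison can be done without specialized tools by hashing everything except the metadata-bearing marker segments (APPn, which carry EXIF, and COM comments). A minimal sketch that ignores edge cases such as marker padding bytes; real tools are more thorough:

```python
import hashlib

# Metadata-bearing JPEG segments: APP0..APP15 (0xE0-0xEF) and COM (0xFE)
METADATA_MARKERS = set(range(0xE0, 0xF0)) | {0xFE}

def jpeg_content_hash(data):
    """Hash a JPEG's bytes while skipping metadata segments, so two
    copies differing only in EXIF/comments produce the same digest."""
    assert data[:2] == b"\xff\xd8", "not a JPEG"
    h = hashlib.sha256()
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: hash the entropy-coded rest
            h.update(data[i:])
            return h.hexdigest()
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker not in METADATA_MARKERS:
            h.update(data[i:i + 2 + length])
        i += 2 + length
    h.update(data[i:])
    return h.hexdigest()
```

The same idea (hash the payload, skip the metadata) generalizes to any format whose metadata lives in well-delimited chunks, e.g. PNG ancillary chunks.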

[+] bitwize|4 years ago|reply
Another thing you'll want for database-centric file stores, that should be table stakes for every desktop OS, is Amiga style datatypes. That is, allow applications to register readers and writers for their file formats. That will help the database parse files for important metadata.
[+] d--b|4 years ago|reply
This is one of those ideas that always float around: files should be located by tags, not folders; or file systems should be relational databases; or file systems shouldn’t exist at all; etc.

But the fact is people are used to files and folders. Tools are built upon files and folders so changing everything is extremely difficult.

Plus all the tools that have tried to do things differently proved to be a pain:

1. Gmail tags: does anyone use tags any differently from folders/files? Having multiple tags on an email means it’ll show up everywhere

2. iPhones didn’t have files, but it was so inconvenient that they were added back

3. Microsoft’s relational file system was never released (I think)

[+] kevincox|4 years ago|reply
For GMail I often used "non-folder tags": for example, I would tag emails based on the to-address so they were clearly marked in my inbox. Or I would tag certain types of emails so that I could review them later; for example, SMS would be tagged, but I would read them in my inbox.

I just really wish GMail archiving was a tag. For example, I get my video subscriptions into a tag called "Videos", but when I am done I remove the tag and that info is lost. It would be nice if archiving were just adding an "Archived" tag that is excluded from tag views by default; that way archiving doesn't forget all the other tags. The only workaround I am aware of is making two tags for everything, like Videos and Videos-Archive: apply both in filters, then remove one once you are "done" with them.

Folders have the same problem. Of course trash systems work around this by explicitly recording the original location.

[+] barrkel|4 years ago|reply
Tagging is work. Fiddly work that's surprisingly costly in effort if it's not trivially automatable stuff like time and date, location, application, and so on.

Naming is hard work, but tagging means creating and choosing shared names all the time, with the pressure that the combination needs to be reasonably unique, otherwise you won't find stuff.

Tagging is also fiddly if you don't have a really good bulk action UI. You can think of the user-controlled paths in a hierarchy as tags, and moving files is the action of untagging and tagging. By moving 100 files from one directory nested three/levels/deep to another, you are removing 300 "tags" and adding 300 different "tags". And you can rename the "tags". A single click and drag, 600 actions, and you can see the before and after trivially, and undo trivially too (at least in Windows).
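The folders-as-tags accounting above can be made concrete with a toy model (not a real tag store, just the bookkeeping):

```python
def dir_tags(dirpath):
    """Treat each component of a directory path as a tag."""
    return [p for p in dirpath.strip("/").split("/") if p]

def move_cost(n_files, src, dst):
    """Count the tag removals and additions implied by moving
    n_files from directory src to directory dst."""
    return n_files * len(dir_tags(src)), n_files * len(dir_tags(dst))
```

Moving 100 files out of a three-component path and into another three-component path is 300 removals plus 300 additions, the 600 implicit actions a single drag performs.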

Tagging is more useful for ad-hoc "favourite" lists, and the occasional cross-reference (but it's work to hunt down the elements in the xref).

[+] sys_64738|4 years ago|reply
If you're running Windows then install Everything.
[+] whywhywhywhy|4 years ago|reply
Boggles the mind how Windows search basically just doesn’t work at this point compared to what you have on other operating systems.

Feels like I can’t even search for a certain file type in a folder.

Really frustrating that Apple, the only company to truly master OS search, doesn't seem that interested in making the type of OS that has files anymore.

[+] Stratoscope|4 years ago|reply
Yep, Everything is a game changer. I don't worry much about where a file is any more, I just worry about giving it a good filename. Then I will always be able to find it, wherever it is.

Be sure to set a hotkey for it. I use Ctrl+Shift+Spacebar since it didn't seem to conflict with anything else.

Of course before you can use Everything, you have to find Everything. Here's where:

https://www.voidtools.com/

[+] _dain_|4 years ago|reply
Everything is a lovely tool, but I'm continually amazed that it should have to exist at all. Why is the Windows built-in search so atrocious? You literally cannot use it as part of a getting-anything-done-at-all workflow. And it keeps getting worse with every update? Why would I want to mix Bing search results with stuff from my filesystem?
[+] oliwary|4 years ago|reply
This has completely changed the way I use files. I rarely ever open the explorer to navigate to a folder, but instead open everything to search for a file and then instantly jump to the file location. Naming files well becomes much more important than where they are located.
[+] bobsmooth|4 years ago|reply
Have it pinned to my taskbar. There's also Windows PowerToys, which includes a quick launcher.
[+] istillwritecode|4 years ago|reply
It's time to wrestle control over files away from users. /S
[+] UltraViolence|4 years ago|reply
What's wrong with adding metadata to each file and indexing that? I thought this was essentially a solved problem.

Also, the OP's solution merely sounds like a slightly altered filesystem. I thought he was going to propose something akin to WinFS, Microsoft's attempt to merge an SQL database with a filesystem, which turned out to be a dud.

[+] beardog|4 years ago|reply
This looks pretty neat (though I will not easily give up files). The author seems pretty frustrated in another post that few people are interested. I am willing to give the design a look over and play with it, but I struggled to even find the website, and on said website there is no way to download the software. I only found a sample data archive.

https://didgets.substack.com/p/what-is-wrong-with-you-people...

[+] theamk|4 years ago|reply
In the comments here, people have named dozens of similar systems. I am sure the same happened in the previous discussions (which must have prompted that frustrated post). The author must have read them... and then went on to write:

"I have invented an entirely new way to store and manage all kinds of data"

There are no references to other systems, no comparisons. Did he just ignore all the prior work? A "Didgets vs. X" table and a section on why it would work this time would do a great deal for this project's credibility.

[+] didgetmaster|4 years ago|reply
I apologize for the missing download file (DidgetsBeta.zip) on the website www.Didgets.com. It seems the latest upload failed, but I have fixed it.
[+] projektfu|4 years ago|reply
Data organization is constrained by the worst system, because of interoperability. How do you move these tags across the internet through systems that don’t understand them?

For example, the Mac had “file types” and “creators” as separate metadata since the beginning. Because type wasn’t encoded in the filename, mistakes weren’t made that accidentally changed the type and you didn’t have multiple files of the same name differing only by extension. The file always opened in its creator but power users could easily change the creator. To make a successful round trip to another system, the file would need to be given the right extension and then another program would need to reassign the file type and creator on reentry. If you didn’t do it right, people would complain that they couldn’t open the document.

In addition, experience shows that organization must happen automatically or people will just let it do whatever. At this point, most users probably have all their documents in one folder and all their downloads in another. If they weren’t indexed automatically, they’d just give up and say they don’t have the documents anymore.
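Some of that automatic organization is possible with metadata the OS already records, without any user effort. A minimal sketch (a toy, not the article's design) deriving tags from a file's extension and modification year:

```python
# Toy auto-tagger using only metadata the OS already has; illustrative only.
import os
import time

def auto_tags(path: str) -> set[str]:
    # File type from the extension, creation-era from the mtime:
    # zero user effort, which is the only effort level users reliably supply.
    _, ext = os.path.splitext(path)
    year = time.localtime(os.stat(path).st_mtime).tm_year
    tags = {f"year:{year}"}
    if ext:
        tags.add(f"type:{ext.lstrip('.').lower()}")
    return tags
```

Richer goals like "photos that might be Brazil" need content analysis on top of this, but extension and date alone already beat a single undifferentiated Downloads folder.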

Come up with an intelligent way to organize automatically and it will be a real revolution. I'd like to be able to find that photo I saw a few weeks ago when I need it. I want all the documents that are similar to the one I found that isn't the exact version I wanted. I want all the photos taken in Brazil as well as unlabeled photos that might be Brazil. I want the EPS version I have of this JPG logo.

[+] joshu|4 years ago|reply
I'm a huge fan of tags (as I promulgated the idea in the first place in the early 2000s) but they have a bunch of problems.

They're better understood as a memory extension system rather than a sole filing system. The idea being that it improves recall of objects if you add some attributes when saving as you are likely to use some of the same attributes when recalling.

But the vast majority of objects on a filesystem are mechanically generated and never touched by the human using it (assuming a sole user).

The model gets much more complicated when many users are interacting with the same system.

As noted elsewhere, the flat namespace gets cluttered very quickly. I do think that there is use for a hierarchical separator, since many times objects are fully inside some other concept. And when looking at massive userbases creating tags for memory, there is a distinct generic-to-specific ordering in the tags they create.
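The separator idea can be sketched as tags written generic-to-specific with "/" and queried by prefix (an illustration of the concept, not any shipping system):

```python
# Hierarchical tags, generic to specific; hypothetical data for illustration.
TAGGED = {
    "img1.jpg": {"photos/travel/brazil"},
    "img2.jpg": {"photos/travel/peru"},
    "doc1.pdf": {"work/invoices"},
}

def with_prefix(prefix: str) -> set[str]:
    # A query at any level of the hierarchy matches everything beneath it,
    # so the flat namespace never has to hold "brazil" next to "invoices".
    return {name for name, tags in TAGGED.items()
            if any(t == prefix or t.startswith(prefix + "/") for t in tags)}

print(sorted(with_prefix("photos/travel")))  # ['img1.jpg', 'img2.jpg']
```

Querying "photos" or "photos/travel" both work, which is exactly the containment relationship that a flat tag set can't express.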

Also, filesystems allow a bunch of workflows that aren't completely obvious under tagging. For example, a business might copy their template folder and rename it for a new customer, and inside it there is a bunch of documents with the same names. I think this is a bit like having a bunch of objects (in the programming sense), thus creating things that all have the same method names (except now they are files).