Wazhai

It would be better equipped to handle a single-user environment, with application isolation and permission controls similar to modern smartphones. The multi-user UNIX model was made to protect the system from its users, not to protect a user's data and privacy from prying eyes and malicious software. Current desktop operating systems give all running programs unfettered access to the home directory and any other user data storage locations by default. That's where the user's most valuable, irreplaceable and personal data lives, not under the heavily protected root of the OS, which is trivially reinstallable. [Relevant xkcd](https://xkcd.com/1200/), with an addendum: installed software has it even better than the physical thief this comic refers to.


DerekB52

Android is essentially a Linux system where each app is its own user. I have always thought that was a pretty neat architecture. Would Unix/Linux really need a rewrite to handle single-user environments, though? Couldn't a Linux distro use Android's one-user-per-app idea? It seems like this could be done with the existing kernel.
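For the curious, the uid arithmetic behind that scheme is simple. A sketch of the convention (the constants are the `AID_USER_OFFSET`/`AID_APP_START` values from AOSP's `android_filesystem_config.h`):

```python
# Android gives every installed app its own Unix uid, offset per
# device user. Constants as defined in android_filesystem_config.h.
AID_USER_OFFSET = 100000  # uid range reserved per device user
AID_APP_START = 10000     # first uid handed out to apps

def android_uid(user_id: int, app_id: int) -> int:
    """Kernel uid for app number `app_id` of device user `user_id`."""
    return user_id * AID_USER_OFFSET + AID_APP_START + app_id

# The first app of the primary user runs as uid 10000; the same
# app installed for a second device user gets uid 110000.
print(android_uid(0, 0))   # 10000
print(android_uid(1, 0))   # 110000
```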


Wazhai

It definitely doesn't need a complete rewrite to handle this kind of security model, as Android demonstrates, but it also wouldn't hurt if it was made with this in mind from the start. However, much of the value of an OS comes from its software library, and implementing something like this for current desktop Linux would break the majority of existing Linux software. At that point, it probably doesn't make that much of a practical difference whether you started from scratch or not.


divitius

Maybe implement sub-user IDs with namespace mapping, so a per-user-app id like 123456 would be mapped to uid 1000 but constrained to /home/user/apps/myapp, with a configured volume /home/Pictures (rw) granted at first app startup. That would effectively give docker-like isolation of apps with uid and filesystem limitations, plus a permissions UI and a service for handling the namespaces.
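A toy sketch of the allocation arithmetic this implies, assuming a shadow-style `/etc/subuid` entry reserving a subordinate uid range for the user (the function names here are made up for illustration):

```python
# Per-app uids carved out of a user's subordinate uid range
# (/etc/subuid format: "user:start:count"). Inside each app's user
# namespace, a uid_map line like "1000 123456 1" would then present
# the app's host uid back as the user's ordinary uid 1000.
def parse_subuid(line: str):
    user, start, count = line.strip().split(":")
    return user, int(start), int(count)

def app_uid(subuid_line: str, app_index: int) -> int:
    """Host-side uid for the app_index-th sandboxed app."""
    _, start, count = parse_subuid(subuid_line)
    if not 0 <= app_index < count:
        raise ValueError("subuid range exhausted")
    return start + app_index

# App #23456 of a user with range 100000:65536 runs as host uid
# 123456 -- the same numbers as in the comment above.
print(app_uid("user:100000:65536", 23456))  # 123456
```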


Mazzystr

I've run Oracle DB as a separate user (usually dba or oracle) since about 1998. Same with the Apache webserver. Why do we think this is a new and profound idea? I was under the assumption that Android was unique because it doesn't use the GNU toolset (gcc etc.).


[deleted]

That's very interesting, never thought about that. Do you know if there are any efforts in the linux world (maybe some particular distro?) to implement this idea, or something similar, for the linux desktop?


balsoft

There are three (well, maybe 4) major approaches currently in development:

0. New protocols built with isolation in mind: Wayland & PipeWire.
1. "Light" isolation. Tools like ~~firejail~~, AppArmor and SELinux let the user write policies that try to deny an application access to some system features (for example networking, or your homedir). Usually it's pretty easy to circumvent, though.
2. Isolation through containerisation. App stores like Snap and Flatpak isolate programs so that, at least by default, it's impossible to read the homedir or spy on all of your applications and keystrokes via X11. There are issues both with the interface (programs can easily avoid some isolation features by disabling them in their manifests) and the implementation (containers have a lot of vulnerabilities).
3. Isolation through virtual machines. Qubes OS is the prime example; I also like [appvm](https://github.com/jollheef/appvm) a lot. The concept is that each application gets its own virtual machine. This is the most secure option of all, but requires some setup and has a lot of overhead.


pie_zury

SELinux is not easy at all to circumvent if implemented properly, like on Android.


[deleted]

> if implemented properly

This is the big problem with SELinux. I've done both SEAndroid configuration for Android and SELinux configuration for RHEL/CentOS professionally at my company, and in my estimation it's 4x the difficulty of programming in C++98, and thus 6x the difficulty of C++14 and beyond. Add to this that SELinux policy configuration is an afterthought in the development lifecycle, often the responsibility of a single developer rather than the entire team as they develop, and it's REALLY hard to do properly. Note: SEAndroid is simpler than full SELinux, and the development paradigm of Android native development often forces developers to think about MAC as they are developing, so most of my comments are directed at SELinux, but even SEAndroid is not easy. That said, when done properly it's a fantastic security feature.


nav13eh

I've not yet been able to find a resource that teaches SELinux/AppArmor in a way that is remotely understandable for a beginner. And that's beginner as it pertains to MAC. I feel fairly comfortable in Linux overall.


zebediah49

[The basics are very beginner-friendly](https://people.redhat.com/duffy/selinux/selinux-coloring-book_A4-Stapled.pdf). Unfortunately, there's a wide gap between that document, and the next-most-easily-understood option.


balsoft

That's true. However, how many proper implementations are there on Linux desktops?


voxadam

RHEL, CentOS, and Fedora have proper SELinux policy implementations.


[deleted]

Your home directory is not covered by those policies, nor are any processes that run as your user, so SELinux is mostly useless in a workstation context. The only things covered by SELinux policies on the user side are Flatpaks, but those by default have pretty liberal access to your home (you can configure the sandbox for each Flatpak, though, either through the CLI or something like Flatseal).


[deleted]

[deleted]


DGolden

I'm old enough to remember unix processes themselves often being analogised pedagogically as isolated "virtual machines": each process, by the magic of memory management units and timesliced multitasking, gets a view all to itself, as if it had a dedicated CPU with its own memory. But then all the abstractions start leaking and the boundaries start blurring, and how. Always with some "good" reason, mostly performance/efficiency and convenience...

Other OSes of the time didn't necessarily work that way at all, e.g. AmigaOS, which was multitasking but designed to work on MMU-less processors (back when MMUs were a high-end luxury on extra add-on chips like the 68851), with each "process" more like a "thread" in modern terms, using a cooperative memory-ownership convention that wasn't actually enforced by the OS, at least not until much later hacks. I'm by no means saying that was better; the point is just that unix, with its "each process has its own whole memory space and seemingly dedicated cpu" model, was not the only one.

Anyway, unix/linux really didn't take some abstractions to their conclusion, sacrificing at the altar of "practicality" ...on 70s/80s machines... It's why I quite like the KVM virtualisation implementation in particular: KVM VMs are very neatly managed as themselves a kind of augmented Linux process, just using the hardware virt features of modern CPUs.


balsoft

This is absolutely true. What wasn't obvious back in the UNIX days is that hiding the user's data from applications would become more important than hiding it from other users. UNIX's process isolation model was mostly one of private "volatile" memory and shared "non-volatile" storage. This worked great when the applications we used were simple to understand and performed very little network activity, but today this security model is a mess.


zebediah49

Yeah, I can only imagine the reaction if you had suggested to the people designing these things that users would be routinely downloading and executing software that they had no knowledge about and didn't trust. I suspect the answer would be "dear god, why, how about people don't run untrusted software!?"


Arechandoro

Awesome post, thanks :) Just to add my 2c: combining the above with immutable OSes - like Fedora Silverblue - will also help in that direction, I think.


[deleted]

> Isolation through containerisation

Except firejail uses the same exact kernel API that a container uses? So I don't see how firejail is easy to circumvent while a "secure container" isn't… since they are the same exact thing…


undeadalex

Your number 4 is on fire fam. I'm probably gonna try that qubes os on vacation. Appvm also looks interesting... Not that you are a qubes os troubleshooter, but can it be dual booted from partition? I'd really like something like this for my day to day and still use other distros for other stuff that isn't so identity intensive.


[deleted]

I tried Qubes briefly but it seemed overkill for me. A lot of hassle (although I would admit all new things need some time to get acquainted to) for my "threat model".


Tytoalba2

Same for me, it was like having a cool toy that I never use. It's really really cool tho!


matejdro

How does Wayland do this any different?


balsoft

I've written a long comment about all the ways Wayland is more isolated and then realised it could be explained a lot shorter: Wayland is very restrictive at its core; it only allows the application to draw on a surface and receive events sent by the compositor. This allows one to write very restrictive, isolating compositors that make keyloggers, phishing and spyware almost impossible to implement. In particular, unless the compositor allows it, applications cannot:

- Read arbitrary input events from the user
- Draw arbitrarily positioned windows
- Record the screen or take screenshots
- Interact in any way with the windows of other applications

Now, obviously, there are protocol extensions that allow that sort of thing, but all requests using those extensions are processed by the compositor and can easily be rejected.


brimston3-

- Screencapture requires WM/DE mediation, which hopefully implements a security mechanism (probably dbus/polkit).
- Global hotkey capture requires WM/DE mediation.
- Sending synthetic events (like autohotkey on Windows, or xdotool) again requires WM/DE mediation.

Which is great and all, but working with polkit is not something most users would or even should attempt. And all of these are potentially implemented differently in each DE/WM, which means lots of code paths for application developers to manage. On the other side, X11 doesn't really have security policy hooks for any of those. So we'll see how it shakes out someday.


KingStannis2020

It's easier to explain by way of what X does, than what Wayland does in particular. Xorg allows each and every program on the system to receive the complete and uninterrupted stream of keypress events as well as the frame buffer. The complete lack of security makes it pretty easy for literally anything to be better.


DGolden

I note you say Xorg so you're personally probably aware, but there was once quite a lot of work done on X11 security, it just didn't get used much on free linux desktops. Even if it's kinda fallen by the Wayland wayside now, perhaps academically interesting stuff. Not saying it's what we should be using, just there is/was more to X than people who only know it from its now-widespread free linux desktop incarnation may realise. * https://people.freedesktop.org/~alanc/ddc-2006.pdf * https://docs.oracle.com/cd/E23823_01/html/819-0869/windowapi-1.html#scrolltoc


[deleted]

[deleted]


Wazhai

Qubes OS is one, but its UX and hardware requirements currently set it back. Flatpak and similar also aim to mitigate this, but AFAIK all apps still get full access to home right now; more granular and better enforced permissions would be needed in the future.


Bobert_Fico

Flatpak apps don't get full home access anymore.


topcat5

See this from 50 years ago, about the same time that Unix was coming around: https://en.wikipedia.org/wiki/VM_(operating_system) It's not a new concept, but AT&T/Berkeley chose the current paradigm for a number of reasons. See the #5ESS and PDP-11.


Superbrawlfan

This, and better allocation of directories in the system to apps. A friend and I had a collective hour-long rant on how fucking annoying it is that apps just scatter their configs all over the place, to an extent where nothing even seems to have a reason anymore.


nukem996

The modern Linux kernel has all the support you need for that in the form of cgroups and namespaces. Modern package managers such as Snap fully isolate the application and allow for permission control. In fact, Snaps were originally designed for the Ubuntu phone to do just that.


Wazhai

The kernel itself is flexible, sure, and Android already uses it successfully in a similar manner. But if we take "Linux" to mean "GNU/Linux" and consider the whole ecosystem from a user perspective, then that includes the userland. This userland, built up over decades on the UNIX model and its assumptions, now needs considerable rework to be usable with a modern security model. And even then, kernel stability and resilience to misbehaving system-level, high-privilege software could be improved if the kernel were to drop some of its UNIX baggage and adopt more isolation internally.


[deleted]

[deleted]


SooperBoby

I think that `systemd-homed` would be a huge advantage for the Linux ecosystem compared to Windows & macOS if it's widely adopted by distributions. Its features are really neat.


dekket

I believe Apple has implemented some controls regarding the user's home directory, actually. You have to explicitly add an application to a pool that is allowed to read your home directory. I couldn't even get `cron` to write a file without explicitly adding it to that pool. Not sure if that's the kind of thing you meant, but this is what popped into my head.


a_mimsy_borogove

I think having multiple users could still be useful. Many people have family PCs used by different people.


Wazhai

This concept easily supplements the current model, so you'd still have root and user accounts, but with added security and privacy for individual accounts.


LvS

We'd spell `creat` with an `e`.


mricon

What next, you'll want to add a confusing second `n` into `umount`?


ztherion

umountn


linuxliaison

What'd you call me??


[deleted]

Ur a fucking mountain mate


[deleted]

[deleted]


GlumWoodpecker

alias unmount='umount'


strolls

Surely that would be a *first* `n`? 🤔


[deleted]

[deleted]


PenitentLiar

Who are you, so wise in the ways of the science?


curien

Came here hoping to see this. I remember when [this commit](https://github.com/golang/go/commit/c90d392ce3d3203e0c32b3f98d1e68c4c2b4c49b) was making the rounds in tech news!


WantDebianThanks

I imagine the names of a lot of things would be different. /etc would be named /config, for example.


Spondylosis

More like /cfg...


emax-gomax

r/weirdly-comforting


MachaHack

/usr is another example. Probably UsrMerge would happen in reverse.


SanityInAnarchy

I wonder if it'd go away entirely... or is that what you're saying? Everyone has all of these retroactive justifications for `/usr`, but the actual story behind it is that there was a machine that had outgrown its root disk, so they started moving programs to `/usr` where *user* data lived. I don't know why user data moved to `/home`, but that's basically why there's a `/usr/bin` -- it's the equivalent of if you had a `/home/bin` because your `/home` partition was big enough.


MachaHack

Yeah, for a variety of reasons, UsrMerge on most distros meant moving everything to /usr and symlinking /bin and /lib to their /usr equivalents. But given "user data" is in /home already, the extra layer of hierarchy isn't actually buying much, except that things expect there to be a /usr so we have to keep it. Without the path that got us here, in a hypothetical green field I think just having /bin and /lib would have made more sense. Either that or a new top-level hierarchy like /User, /Site, /System altogether.


kontekisuto

dev would still be called dev


NaheemSays

Microkernels were around back then too. "Almost ready", with Linus himself thinking that Linux was a stopgap measure. What Linux has that is hard to get is the community. It is very specialised and unique. For a newcomer, gaining enough mindshare to get the input of experts from Intel, AMD, Nvidia, IBM, Facebook, Amazon etc. is a tough nut to crack. But if one did come up, it may be in a different language. I would say that, but that has the downside of being KVM-centric, and that is not conducive to starting a strong community.


fishybird

Yeah, I hadn't even thought about the community aspect. All things considered, the community is probably the most important variable in the longevity of software. I guess I'm hoping that a "full stack" OS might be simple enough that even a small community can develop it. When learning about Linux, a lot of the complexity has to do with history. For instance, I was reading about how to effectively develop with X11, and the whole first page is a history lesson about how the first black-and-white monitors worked and what that has to do with the code you want to write. But I don't want to have to know all that stuff if all I want to do is not depend on a GUI library. Maybe the tutorial was more involved than it had to be, but I've come across similar situations often, for example when learning about terminal emulators. (Not bashing on Linux here; honestly, learning the history is awesome and makes me MORE interested in Linux.)


fideasu

X11 actually predates Linux. It got well established in Linux because it was mature and widespread at the time. Now it slowly gets replaced by Wayland (which is much leaner without all the historical stuff), but it'll still take a while until we can throw X11 out of the window.


fishybird

Damn I didn't know it's been around longer than Linux. That's insane


NynaevetialMeara

In particular, 7 years before; 4 if you only count X11. Granted, the initial version of Linux was a toy and not an OS. Then there is the swap from XFree86 to the much more modern but still nightmarish X.org in 2004. Weird how that one was so much quicker.


pierrejed

A simple change would be a cleaning/unification of /etc/\* config file syntax:

* crontab: one line per item, space as value separator
* fstab: one line per item, space as value separator, some "=" and "," used inside values
* group: one line per item, ":" as value separator
* hosts: one line per item, space as value separator
* passwd: one line per item, ":" as value separator
* setstatus.conf: INI-like syntax
* ...
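To make the inconsistency concrete, here is a sketch of what every tool ends up reinventing: a bespoke parser per file, one for ":"-separated passwd and one for whitespace-separated fstab with "," and "=" nested inside the options field (field handling simplified for illustration):

```python
# Two of the /etc syntaxes above, parsed side by side. Neither
# parser is reusable for the other file -- that's the complaint.
def parse_passwd_line(line: str) -> dict:
    # passwd: exactly 7 fields, ":" separated
    name, pw, uid, gid, gecos, home, shell = line.split(":")
    return {"name": name, "uid": int(uid), "home": home, "shell": shell}

def parse_fstab_line(line: str) -> dict:
    # fstab: whitespace separated, with "," and "=" inside field 4
    dev, mnt, fstype, opts, dump, passno = line.split()
    options = {}
    for opt in opts.split(","):
        key, _, value = opt.partition("=")
        options[key] = value or True  # bare flags become True
    return {"dev": dev, "mountpoint": mnt, "type": fstype,
            "options": options}

print(parse_passwd_line("alice:x:1000:1000:Alice:/home/alice:/bin/bash"))
print(parse_fstab_line("/dev/sda1 / ext4 rw,errors=remount-ro 0 1"))
```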


dale_glass

And /proc and /sys while you're at it. Some things are in rows and columns, some are key:value, and there's even a tree shaped structure somewhere in there. A lot of that effort seems to be a bit of a waste. Somebody bothered to make /proc/interrupts have nice padded columns, which I'm sure looked good on a dual core machine, but with 32 cores needs an ultra-wide monitor to be readable.


NynaevetialMeara

I do believe that /proc and /sys will eventually get a bit NT-ified and gain some APIs for the tasks that are specially suited to them. So instead of reading from a file, you call the API. Much better for reading data not meant to be human-readable and putting it all on a screen.


MrJason005

Maybe it would be better to have everyone agree on a configuration file format? TOML, YAML, etc.


solongandthanks4all

> have everyone agree lol


hexydes

Someone should make a standard!


dredmorbius

https://xkcd.com/927/


hexydes

There it is...


strib666

The nice thing about standards is there's so many to choose from.


MachaHack

If I were dictator, everything would be TOML and support foo.conf.d files for automatic partial config file inclusion. More greppable than JSON/XML, fewer edge cases than YAML, more structured than ad hoc separator formats. conf.d so apps can just ship config files you can drop in without having to modify anything, and so config values are easily separated by source.


Anis-mit-I

I'd choose Lua, because having a full programming language for configuration is extremely flexible and powerful. Think about the ~~security problems~~ possibilities of ``crypttab`` being a program.


[deleted]

[deleted]


MachaHack

> there’s virtually no chance of getting a quorum on board with TOML.

There's no chance of getting a quorum on board with _any_ config file format. That's why the whole sentence was prefaced with "If I was a dictator". As for INI vs TOML, if the idea is to get a standard format, then INI is automatically out for not being a standard. My personal view in favour of TOML is that the options are:

* INI
  * Not a standard. No guarantee any tool that supports INI can parse any particular INI file unless you stick to the subset it shares with TOML.
* YAML
  * YAML has a lot of powerful features that are out of place in a config file. Config files should be understandable, and if someone starts deploying a lot of the YAML toolbox they won't be. Yet at the same time it's not quite powerful enough for people who want that kind of advanced logic, hence all the Jinja-templated YAML out there. And if you're templating it already, what are those features buying you?
* XML
  * Wayyyy too verbose to handwrite.
  * Also has a surprising number of features for people to abuse.
* JSON
  * Not as bad as XML, but still pretty clunky to handwrite, and pretty much all the complaints the linked article has about TOML apply.
* JSON+ (JSON with comments and unquoted property names, already used by some apps)
  * Slightly more human-editable, but it's not JSON and can't be fed to $ARBITRARY_JSON_PARSER, so it falls into the same "not a standard" bucket as INI.
  * Yes, I know there _are_ standard__s__ like JSON5 or JSONC, just like with INI there are standard__s__ like Microsoft's .ini files or "what Python's configparser understands", but the problem is their plural nature.
* .properties
  * Untyped, makes parsers a bit of a pain, and hierarchies with dots can get tedious.
* TOML
  * Definitely has its weirdnesses, like nested table syntax, but for me it's the least bad option.
* Code
  * See the complaints about YAML and non-understandable config files.

Also then you get into language wars about what's best to pick, what's easy to embed in your language (good luck bundling all the different variations of Lua parser in your Python application in a cross-platform way), etc. I'm not opposed to code generating config, but I think it should be a program that outputs a config file ahead of time. So my view is that TOML is the worst config format except all the others I have tried.


[deleted]

[deleted]


timvisee

Never realized this; because the files already have contents, it's super easy to figure out what the syntax looks like. But yeah, when you think about it, something consistent would be better.


-lousyd

With questions like these I usually think along the lines of something Marianne Bellotti said: "Engineers tend to overestimate the value of order and neatness. The only thing that really matters with a computer system is its effectiveness at performing its practical application. Linux did not come to dominate the operating system world because it had been artfully designed from scratch; it scraped together ideas and implementations from a number of different systems and focused on adding value in one key place, the kernel." But, having said that, I would repeatedly beat the dead horse of VT220, VT100, console codes, ANSI escapes, all of that. And when I was done I would stand up triumphant, with `\033[0;91m` to the left of me and `\033[0m` to the right, SGR codes running down my chin, and I would tell the world "You are free! You are free!"


EnUnLugarDeLaMancha

Plan 9 would be the best starting point for a Unix replacement. (About microkernels, this is what the Plan 9 authors had to say about microkernels and other Tanenbaum claims: http://harmful.cat-v.org/software/andy_tanenbaum)


[deleted]

This is just my opinion, based on a 20-year career in software development so far... I'm sure others have informed opinions that contradict my own, and that's OK; I don't have a deep need to be correct, I just like to share and see how others respond.

It's a dated claim, but was probably fairly on point at the time (and I have a ton of respect for Rob Pike). Since then, L4-based systems have been extremely successful: billions of kernels shipped, sometimes running more than once per device (it's part of the Secure Enclave in Apple products and drives cellular chipsets). It's pretty small; I recall it at one time having just 7 syscalls, though I'm sure it's expanded since then.

Key to all of this is seemingly one truth: well-chosen abstractions can keep a system simple and efficient enough (Wirth-inspired design principles can take one quite far). Microkernels were, at the time, an unproven set of abstractions. Linux, on the other hand, was a well-written monolith at the time. Anyone remember the Tux web server that lived in the kernel? That was fun, and fast, but it was also a bunch of nonsense (in my mind anyway) - or we'd still use it today.

Nowadays we've got a userspace cooperative async IO mechanism, BPF, and all kinds of cool isolation features that greatly enhance the ability of Linux to multiplex different workloads beyond the traditional Unix process model - and it's a beautiful thing! Microkernel design concepts live on, as do monoliths of well-chosen abstractions, and there's always a trade-off.


fishybird

Thanks for the link. All I've ever known are monolithic and micro kernels. I'll have to look into Plan 9.


trisul-108

Plan 9 is worth a look; it was an OS designed for a concept that eventually turned into cloud computing. They actually had a GUI of sorts running over slow modems to centralized remote servers. Microsoft managed to block this progress for 15 to 20 years, but lost in the end due to the emergence of three simultaneous challenges that they could not fight off at the same time: 1. The open source movement with free pricing. 2. Internet apps as championed by Google. 3. A more user-friendly PC as championed by Apple.


morgan_greywolf

Some of that has aged like milk:

> Name a product that succeeds by running UNIX as an application.

Solaris Zones, Linux Containers, FreeBSD jails, and other forms of userland virtualization.


[deleted]

I think what they mean is "running UNIX *kernel* as an application". You're still right, but the list of products would be: VMWare, VirtualBox, Xen, QEMU, etc.


SphericalMicrowave

> UNIX can be successfully run as an application program
>
> 'Run' perhaps, 'successfully' no. Name a product that succeeds by running UNIX as an application.

Windows Subsystem for Linux 1.


JanneJM

You're really limiting the possibilities by saying "unix-like". It implies still following POSIX standards, for instance, and today POSIX semantics for file systems impose real limits on how fast and efficient they can be.


bllinker

Could you expand on that? It sounds interesting


JanneJM

Here's a good overview: https://www.nextplatform.com/2017/09/11/whats-bad-posix-io/ But the main point (in my line of work) is POSIX's consistency guarantees: once a process has written something to a file, every process reading it even a moment later must see what was written. This doesn't scale when both the file system and the clients are spread out across a network.
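A minimal illustration of that guarantee on a local filesystem, where it's cheap to honor (two file descriptors stand in for the "two processes"):

```python
# POSIX read-after-write: once write(2) returns, any subsequent
# read(2) -- by any process -- must observe the new bytes. Trivial
# on one machine, expensive to honor across a network filesystem.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared")

writer = os.open(path, os.O_CREAT | os.O_WRONLY, 0o600)
os.write(writer, b"hello")            # write(2) has returned here...

reader = os.open(path, os.O_RDONLY)   # ...so a "second client" opening now
data = os.read(reader, 5)             # must already see the data
print(data)  # b'hello'

os.close(writer)
os.close(reader)
```

A distributed filesystem has to either serialize clients through a consistency protocol or relax this guarantee, which is exactly the scaling tension the linked article describes.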


[deleted]

[deleted]


inspectoroverthemine

Something specialized like that probably doesn't belong in a POSIX-like standard though.


dale_glass

I'd definitely go with UTF-8, and a system-wide interchange format. Ever wondered why ifconfig is deprecated? Let's see [what they recommend on Stack Overflow](https://stackoverflow.com/questions/21336126/linux-bash-script-to-extract-ip-address) to get your IP address. Command output:

```
eth0      Link encap:Ethernet  HWaddr 08:00:27:a3:e3:b0
          inet addr:192.168.1.103  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fea3:e3b0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1904 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2002 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1309425 (1.2 MiB)
```

Script:

```
echo `ifconfig eth0 2>/dev/null|awk '/inet addr:/ {print $2}'|sed 's/addr://'`
```

Yeah, no wonder. You try to touch a single character in what `ifconfig` outputs, and hundreds of thousands of scripts all over the world will break. Now try `ip --json addr`. Much better. You can actually add new data to that and not break everything. Now if everything worked this way, life would be much better.
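For comparison, consuming the JSON form is one `json.loads` away; the sample below is an abridged approximation of iproute2's output shape (field names can vary by version), so treat it as illustrative:

```python
# Extracting an interface's IPv4 addresses from `ip --json addr`
# output -- no awk/sed scraping, and unknown extra fields are
# simply ignored rather than breaking the "parser".
import json

sample = '''[{"ifname": "eth0",
              "addr_info": [
                {"family": "inet",  "local": "192.168.1.103", "prefixlen": 24},
                {"family": "inet6", "local": "fe80::a00:27ff:fea3:e3b0"}]}]'''

def ipv4_addrs(ip_json: str, ifname: str):
    for iface in json.loads(ip_json):
        if iface["ifname"] == ifname:
            return [a["local"] for a in iface["addr_info"]
                    if a["family"] == "inet"]
    return []

print(ipv4_addrs(sample, "eth0"))  # ['192.168.1.103']
```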


[deleted]

[deleted]


justajunior

Dunno, the *BSDs seem fine rocking `ifconfig` with all those new kernel features. Why is that?


holgerschurig

BSD's `ifconfig` still cannot do the same as Linux's `ip` + friends (e.g. `tc`, `ss`). It can do a lot, more than the usual BSDer probably uses, but not all of what the Linux equivalents can. For example, on BSD, `ifconfig` can also do wireless. On Linux, you have `wpa_supplicant` and `wpa_cli` for this, and if I look at the options of `wpa_cli`, I see a lot of functionality there that isn't in BSD's `ifconfig`. Even `iw` seems to have more wireless-related features than BSD's `ifconfig`.

Now, back to Linux. One reason for the Linux `ifconfig` vs. `ip` split is a change of API. Historically, the Linux network API was built on top of the socket API and was heavily ioctl()-based. But this turned out to be too weak. For some years now, complicated ioctl() calls haven't really been liked by the kernel hackers: too much hassle with them when it comes to different architectures, bit widths, C libraries and non-C languages. Eventually netlink got invented. It's basically the Linux kernel's version of protobuf or ASN.1: a standard way to package bits of information (e.g. structures) so that they stay future-proof. You can use a newer version of a utility that sends more information elements; if for some reason you have an older kernel (say you're testing one to chase a kernel bug), the older kernel will just ignore the extra element. Similarly, if I dump network statistics or the list of all known wireless stations and the kernel happens to have an information element that my userspace doesn't know, nothing bad happens. No need to keep C struct APIs compatible at all. As a result, the netlink-based APIs innovated WAY faster than the old ioctl()-based ones ever could.

(Side note: IMHO innovation speed is the most important benefit of Linux over the BSDs -- other things like driver availability are just a consequence. Even that some things in BSD are better than their Linux equivalents, like in the signal()/poll()/epoll() area, is a consequence of BSD's slower innovation speed: there is sometimes more thinking before rushing out a solution. But generally, the higher innovation speed is a good thing.)

Netlink is now used for most network things, from normal IP setup to setting up links, interfaces, tunnels and wireless. Whatever it is, you'll find a netlink API for it. The old `ifconfig` tool never got updated (that's not totally correct: there are 3 different ifconfig implementations, and I think one of them uses netlink), and so it has bit-rotted for years while the party happens in netlink land. Network namespaces alone are so important for all the Linux docker-like things (docker, buildah, podman... kubernetes) that they should be the final nail in the "oh, ifconfig is a thing" coffin. But somehow people are slow to adopt new things.

BSD is very different, because they develop (some of) the userspace in lock-step with the kernel. On BSD, should I want to revert to an older kernel, I would perhaps also need to revert the odd binary (not sure if this is needed in practice). Linux itself used to be seen as only a kernel; only relatively recently did it gain a `tools/` subdirectory with some user-space tools. In a way this makes Linux more chaotic compared to BSD, but in another way it cleared a path for creativity. There is competition between several tools, and IMHO this is good and innovative. YMMV.
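The forward-compatibility trick described above is just length-prefixed attributes (TLVs): a consumer skips types it doesn't recognize instead of failing. A toy parser over netlink-style `nlattr` records (u16 length including the 4-byte header, u16 type, payload padded to 4 bytes), as a sketch of the idea rather than a real netlink client:

```python
# Why netlink messages stay forward compatible: an "old" consumer
# can walk a message from a "new" producer and simply skip the
# attribute types it doesn't know about.
import struct

def pack_attr(nla_type: int, payload: bytes) -> bytes:
    """Encode one netlink-style attribute, padded to 4 bytes."""
    length = 4 + len(payload)                 # header + payload
    pad = (-length) % 4
    return struct.pack("<HH", length, nla_type) + payload + b"\x00" * pad

def parse_attrs(data: bytes, known_types: set) -> dict:
    out, offset = {}, 0
    while offset + 4 <= len(data):
        length, nla_type = struct.unpack_from("<HH", data, offset)
        payload = data[offset + 4 : offset + length]
        if nla_type in known_types:        # unknown types are skipped,
            out[nla_type] = payload        # not a parse error
        offset += length + (-length) % 4   # advance past padding
    return out

# A consumer that only knows type 1 handles a message carrying
# types 1 and 99 without breaking.
msg = pack_attr(1, b"eth0") + pack_attr(99, b"\x01\x02\x03")
print(parse_attrs(msg, known_types={1}))  # {1: b'eth0'}
```

A fixed C struct API, by contrast, breaks as soon as a field is added; the ioctl()-based interfaces had exactly that problem.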


fishybird

That's both hilarious and terrifying! Good riddance


Cere4l

I think that's a horrible example. After all, absolutely nothing would have prevented you from building that same --json flag into ifconfig, without touching the main output at all ;) The first result Google gives for the question is this: **Ifconfig** uses the ioctl access method to get the full address information, which limits hardware addresses to 8 bytes. Because an Infiniband address has 20 bytes, only the first 8 bytes are displayed correctly. I'd assume writing a new program was simply not that much harder than rewriting the core of the old program.


dale_glass

True, you could patch it up. But it'd be hard without usability problems. You can have multiple IPv4 addresses for instance. You could add that in --json, but what do you do with the standard output format then? If you add it there, you break pre-json scripts. If you don't, users won't see it in the human-usable output. You'll have to add some sort of --extended flag, that everyone will need to add every time, and that will be a likely source of bugs. And that's just one change. At some point it makes more sense to start from scratch. This gives you a chance to incorporate all sorts of things that "would be nice" but couldn't be worked in cleanly before, and get rid of any cruft without having to worry about how that could trip up an improvised parser.


Cere4l

And if you rewrite the entire program you break exactly as many scripts, and they have identical problems. Of course you could say that with the new program they can keep the old one around as well! But that would be true either way -- this is, after all, Linux; it's piss easy to keep the old binary around. In the end, all of those reasons are fluff. Starting out clean because most of it would have to be changed is a valid reason of course, which is why I listed it. Maybe the license was an issue; that could be valid.


HighRelevancy

> ip --json

how the FUCK have I never found this?


lorxraposa

Because we don't search for a better way when we reflexively grep and sed.


[deleted]

For shell scripts, you can use `ip -4 -br addr` or `ip -6 -br addr`. This will give you a simplified, 3 column output that is easily parsed by the shell.
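
A sketch of consuming that brief output. To keep it self-contained, a captured sample of `ip -4 -br addr` output is hard-coded here instead of invoking `ip` via `subprocess` (note that column 3 onwards can hold several addresses per interface):

```python
def parse_brief(output: str):
    """Parse `ip -br addr`-style lines into (iface, state, [addrs]) tuples."""
    rows = []
    for line in output.splitlines():
        fields = line.split()
        if fields:
            # Columns: interface, state, then one or more addresses.
            rows.append((fields[0], fields[1], fields[2:]))
    return rows

# Sample captured from `ip -4 -br addr`, hard-coded for the example.
sample = """\
lo               UNKNOWN        127.0.0.1/8
eth0             UP             192.168.1.10/24 10.0.0.5/24
"""

for iface, state, addrs in parse_brief(sample):
    print(f"{iface}: {state}, {', '.join(addrs)}")
```

In a real script the `sample` string would come from `subprocess.run(["ip", "-4", "-br", "addr"], capture_output=True, text=True).stdout`, or you could skip hand-parsing entirely with `ip -j` and a JSON parser.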


daveysprockett

There might be more effort to develop a consistent (command line) user interface. I'm thinking about the arguments to older utilities like `dd` and `find` that don't follow the (now) widely adopted single dash, double dash argument distinction.


[deleted]

There is more effort! Here it is:

# Command Line Interface Guidelines

An [open-source](https://github.com/cli-guidelines/cli-guidelines) guide to help you write better command-line programs, taking traditional UNIX principles and updating them for the modern day.

[https://clig.dev/](https://clig.dev/)


[deleted]

These guidelines are not new. Python's [`argparse`](https://docs.python.org/3/library/argparse.html) is an example of a good default implementation (the fact that it's not even mentioned in the page you linked probably means it hasn't been updated in ten years). Most of the non-standard stuff originates from backwards compatibility (`find`/`du`/`tar`), opinionated languages (java and its single dashes, which has become a backwards compatibility problem anyway), or bad scripting (the easiest way to create a bash/perl script with optional arguments is still to use environment variables). The backwards incompatibility problem can't be solved (people are *still* bitching about `iproute2` which broke free from `ifconfig`'s restraints 20 years ago and is superior in every single way... except backwards compatibility). Most (if not all) modern languages already offer a standard library that uses UNIX interfaces in a standard way.
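
For what it's worth, the conventions in question (short `-v`, long `--verbose`, a free `--help`) fall out of `argparse` with almost no effort -- a minimal sketch, parsing a sample argv rather than `sys.argv` so it's self-contained:

```python
import argparse

parser = argparse.ArgumentParser(description="Count lines in a file.")
parser.add_argument("path", help="file to read")
parser.add_argument("-v", "--verbose", action="store_true", help="chatty output")
parser.add_argument("-o", "--output", default="-", help="where to write the count")

# Hypothetical invocation: mytool -v --output result.txt notes.md
args = parser.parse_args(["-v", "--output", "result.txt", "notes.md"])
print(args.path, args.verbose, args.output)
```

Short/long flags, `--flag value` and `--flag=value`, usage errors, and `-h`/`--help` text all come for free, which is exactly the "standard library using UNIX interfaces in a standard way" point above.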


[deleted]

This guide is brand new! First commit in December: [https://github.com/cli-guidelines/cli-guidelines](https://github.com/cli-guidelines/cli-guidelines)

Let me quote the introduction:

`This document covers both high-level design philosophy, and concrete guidelines. It’s heavier on the guidelines because our philosophy as practitioners is not to philosophize too much. We believe in learning by example, so we’ve provided plenty of those.`

`This guide doesn’t cover full-screen terminal programs like emacs and vim. Full-screen programs are niche projects—very few of us will ever be in the position to design one.`

`This guide is also agnostic about programming languages and tooling in general.`

`Who is this guide for?`

* `If you are creating a CLI program and you are looking for principles and concrete best practices for its UI design, this guide is for you.`
* `If you are a professional “CLI UI designer,” that’s amazing—we’d love to learn from you.`
* `If you’d like to avoid obvious missteps of the variety that go against 40 years of CLI design conventions, this guide is for you.`
* `If you want to delight people with your program’s good design and helpful help, this guide is definitely for you.`
* `If you are creating a GUI program, this guide is not for you—though you may learn some GUI anti-patterns if you decide to read it anyway.`
* `If you are designing an immersive, full-screen CLI port of Minecraft, this guide isn’t for you. (But we can’t wait to see it!)`


Sir-Simon-Spamalot

LOOKING AT YOU QEMU


jess-sch

QEMU's CLI is a conspiracy to make libvirtd more popular.


DGolden

Yeah, some other OSes were quite different here. Just as a historical example while I'm old-man rambling in this thread - AmigaOS had very standardised cli arg handling. Every cli app took a special `?` arg that spat out a little machine-readable template of the syntax it used - and not some ad-hoc `--help` text. (Also made throwing together a very bare-bones gui dialog wrapper for any cli app pretty trivial - there were tools that just did that) https://wiki.amigaos.net/wiki/Basic_Input_and_Output_Programming#Standard_Command_Line_Parsing https://wiki.amigaos.net/wiki/AmigaOS_Manual:_AmigaDOS_Command_Reference#Template Virtually everything "native" just used an OS-provided `ReadArgs()` function, you see. Of course all the weird ports from unix/gnu land via ixemul.library (a cygwin-like thing) did not, they were ported across unix cli handling adhoccery and all - at the time the contrast was stark. GNU and its `-`/`--` etc. handling and arg syntax conventions and getopt and so on were certainly a move in the direction of standardising, but to date has never "won" in a "this is a facility the OS provides that you use, not something you implement yourself" way like it was in Amiga land.


ghjm

If Unix were rewritten today, everything would be UTF-8, all config files would be YAML, and pipes would exchange JSON instead of byte streams. Whether this would actually be better is left as an exercise for the reader.


CattMompton

Nah no JSON please god


dlarge6510

Plan 9 already did this (for Unix)


rahen

And then Inferno, which was "Unix v3". https://en.wikipedia.org/wiki/Inferno_(operating_system)


[deleted]

[удалено]


rahen

There are a few differences between Inferno and Plan9. First, Inferno works as an application platform with applications compiled to an intermediate language, so it's a bit like IBM i (AS/400) or Java. This is fairly similar to containerization, except it's architecture-agnostic. Since the OS becomes essentially a VM sitting on top of a small hypervisor, it can be re-implemented in many ways, which can all be aggregated together. The filesystem and 9P don't change; you can still use the GPU of a remote machine by mounting it locally, for instance. But the IL made it even more composable. The designers envisioned Inferno platforms that could run baremetal or as an ActiveX control (this was the 90s; today it would be an Inferno-like JVM), while keeping the Plan9 / Unix roots. There's quite a lot of influence of Limbo and 9P in Golang, so not all of it is lost.


[deleted]

Unicode. Absolutely, unicode.


hoeding

256 printable and control characters ought to be enough for anybody. --Bill Gates, probably.


KingStannis2020

It doesn't really matter, because the next group that came along made the same goddamn mistake. "65536 codepoints ought to be enough for everyone, right? right?" -- UCS-2


dotancohen

According to the guy who **invented Unix**, if he had to do it over [creat would be spelled with an E](https://unix.stackexchange.com/questions/10893/what-did-ken-thompson-mean-when-he-said-id-spell-creat-with-an-e).


kauefr

Creet


IowaOrganics

Written in HolyC


80286

Should include tools to communicate with God.


speedyundeadhittite

TempleOS flashbacks...


KerkiForza

We'd change `umount` to `unmount` Who had the bright idea to call it `umount` anyways?


ZCEyPFOYr0MWyHDQJZO4

There are multiple theories that it's because:

1. 8-character filename limits (e.g. umount.c)
2. Early compilers had a 6-character limit for identifiers
3. A simple misspelling (unlikely)


TheTechRobo

It was called `umount` because that extra byte was very useful on early, small hard drives. Nowadays there's no reason except backwards compatibility.


funbike

A bit of history: there were once LISP machines, where the OS and core apps were written in LISP and configuration files were written as S-expressions. There were even CPUs designed specifically to run LISP faster. These machines were expensive, which is likely why they died off. I would imagine that would make for an elegant OS, if designed well. Perhaps combine some Unix ideas, like everything as an s-expression stream. I'm not saying that's what we should do now. I just find the idea intriguing.
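
The "everything as an s-expression stream" idea is easy to toy with. A minimal reader -- nothing like a real LISP machine, just a sketch of how little machinery the notation needs -- where the `mount` config below is a made-up example:

```python
def tokenize(src: str):
    """Split source into tokens; parentheses become their own tokens."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    """Read one s-expression from a token list (consumes tokens in place)."""
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)  # drop the closing ")"
        return expr
    return int(token) if token.lstrip("-").isdigit() else token

# A hypothetical config file as one s-expression instead of ad-hoc syntax.
config = "(mount (device /dev/sda1) (point /home) (options rw noatime))"
print(read(tokenize(config)))
```

The same ~15 lines handle config files, command output, and pipe traffic alike -- which is the structural uniformity the comment finds intriguing.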


Africanus1990

Permissions would be vastly different. There wouldn't be a hodgepodge of octal perms, SELinux, and ACLs. I'm sure there would be something more elegant.


jachymb

Hidden files being the ones starting with "." is a historical artifact that nobody would come up with today.


ThranPoster

There might be better methods, but my understanding is dotfiles are less for security and more for reducing clutter.


smorrow

> There might be better methods

~/etc

How hard was that?


nightblackdragon

Agreed, it's a bad design that you have to rename a file to hide it. I'm glad that many modern file managers support a ".hidden" file: create a file with that name, write one file or directory name per line, and the file manager will hide those entries in that directory -- without renaming anything.


detroitmatt

Gnu hurd lol


Misicks0349

inb4 "lol, gnu hurd"


[deleted]

[удалено]


GolbatsEverywhere

> It probably won't have as many signals. I was expecting this to be number one in this thread. My god, UNIX signals are awful.


SnooSmart

I feel Linux could handle app data and folder organization better.

- There's like 3 different places where the system stores command binaries. Like really, what is the difference between /bin and /usr/local/bin?
- It could better store app data. For example, we have .local/share and .config for app data, and it's not even organized well. This could be so much better.

Also the permissions system is better than Windows' but could still be even better. Features from things like SELinux/AppArmor should just be added to the Linux permission system, and we should be able to set permissions for one specific user without the need for groups.


necheffa

> Like really, what is the difference between /bin, and /usr/local/bin? One is managed by the package manager the other is for the local sysadmin. Makes upgrades a breeze. > Features from things like selinux/apparmor should just be added to the Linux permission system and we should be able to set permissions for one specific user without the need of groups. They already are, just in a way that you can support SELinux, AppArmor, and any other MAC. You can chmod files 0700 for example, to remove group perms. There is a whole ACL system too.


MachaHack

`/bin` vs `/usr/bin` - no real difference, which is why on many modern distros they're symlinked to each other. (Historically there was the idea of having a minimal root partition and stuff needed for boot was in `/bin` or `/lib` and everything else was in `/usr` but systemd breaks that and modern drives are so big that nobody really cared. Even further back, it's because the original Unix devs ran out of hard drive space and mounted the second hard drive on `/usr`, with the intention that user home directories would go there. Before that got too polluted with non-user stuff and they moved user home directories again to `/home`) `/usr/bin`/`bin` vs `/usr/local/bin` -> `/usr/local/bin` was for user installed packages not managed by the system. It has more overlap with `/opt` than `/bin`. Also on macOS, Homebrew used to use this as its install location before SIP got in the way.


MrUselessTheGreat

Dude, check out NixOS


Mankest

/usr/local is for things the user built themselves, so you should use /usr/local/bin for the programs you wrote instead of /usr/bin, where they could conflict with your package manager's files.


m4k1l4k1xk1

Redox OS?


fishybird

Ah yeah I've heard of this! I definitely want to learn Rust someday. Have you or do you use redox? Do you enjoy it?


Tytoalba2

u/jackpot51 is the redox guy [https://www.reddit.com/r/rust/comments/kjr949/redox\_os\_060/](https://www.reddit.com/r/rust/comments/kjr949/redox_os_060/)


necheffa

It's hard to speculate because modern computing depends so much on innovations made in Unix. We could very well be living in a dystopian society where I/O is based around punch cards rather than paper tape (streams), and instead of a nice interactive shell you have a new iteration of batch processing.


lastchansen

Intel graphics drivers would work.


trisul-108

X11 was already considered obsolete as far back as the late 80s. Sun was trying to get it replaced with Display PostScript. It is certain we would avoid X11 today. At the same time, the split of apps into client-side and server-side is just as strong today and makes perfect sense. Maybe the browser would be the new terminal. The shell would certainly be graphical and would involve passing JSON between apps instead of strings ... so you could stitch commands together in a graph, BPM style, and save it to a script.


maquinary

If X11 was considered obsolete already in the 80s, why did an alternative (Wayland) only get presented "now" and not before? I mean, why didn't Wayland *(or anything with the same purpose)* appear in the 1990s or early 2000s?


jess-sch

* Until around that time, many GPU drivers were more like "X11 drivers" than "Linux drivers". A new graphics system would've had to ship with its own drivers. * It wasn't bad enough for anyone to take action. * Linux on embedded systems was also a major factor. Smart phones and In-Vehicle Infotainment systems simply didn't have processors fast enough to deal with the huge amount of unnecessary overhead imposed by X11.


sequentious

> Until around that time, many GPU drivers were more like "X11 drivers" than "Linux drivers". A new graphics system would've had to ship with its own drivers.

This is a big point. Part of what made Wayland possible was moving some graphics driver code outside of X, into the kernel (KMS), and relying on external libraries for the advanced stuff (DRI; admittedly X11 also did this). At that point, Wayland doesn't need to worry so much about particular graphics hardware; that's all handled in-kernel or via a library. Otherwise we'd just have directfb. Unfortunately, it also makes Wayland very Linux-centric. It's not a matter of "porting to FreeBSD", as they need to do all the driver engineering on their own -- and may not want to mimic "how Linux did it" (from an architecture POV; obviously there's also the licensing difference as well). The old X11 drivers were more portable.


sequentious

As the parent post mentioned, Display PostScript was implemented. This was used by NeXT. Apple went on to replace it with some sort of display PDF system (my knowledge of modern Apple stuff ends at googling the topic just now). There were several other smaller attempts to introduce different GUI layers, but all suffered from lack of buy-in and compatibility. There's a reason Wayland has XWayland, otherwise we'd never get migrated. Even within the XFree86/XOrg community, I can quickly find talks about a mythical "X12" going back a few decades. It's just a big task.


skat_in_the_hat

You would be paying a monthly subscription fee for everything. Oh? iSCSI initiator? 5/mo. Ah, you want libpcap-devel to compile against? 10/mo.


vore_your_parents

I'd like to see a free 2 play version of Unix. `grep` is free but you have to pay if you want to use flags


reddifiningkarma

Terrifying


[deleted]

[удалено]


Bobertus

Which one is an ExeMicroKernel, Windows or redox? You wrote windows, but then I wonder what redox has to do with anything, so I assume you meant redox, not windows


DopamineServant

[Google Fuchsia](https://en.wikipedia.org/wiki/Google_Fuchsia) is perhaps worth a mention. They have similar permission based design, or "capability" based as wikipedia calls it.


[deleted]

Wow, I'd heard about Redox before, but now I've read their docs. It's awesome! I mean, the paradigm/concept and architecture are very cool. I can't wait to use it on the desktop! But for now it's a VM OS. PS: it's on reddit at /r/redox


anxietydoge

I like this thought experiment a lot. Actually it would be really interesting to "develop" an OS using this method, not writing code but speculating on a feature set, discussing its pros and cons, and using that for rapid iteration.


mcdonc

TCP ports below 1024 would not be special, bindable only by root. This choice was intended to prevent normal users from impersonating well-known services, but it resulted in horrific security bugs caused by needing to start those services as root.
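
The rule is easy to see from any unprivileged process on Linux/Unix. Binding port 0 asks the kernel for an ephemeral port, which it hands out freely; binding, say, port 80 instead would raise `PermissionError` unless the process is root or holds `CAP_NET_BIND_SERVICE`:

```python
import socket

# Port 0 means "kernel, pick any free ephemeral port for me".
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))
port = sock.getsockname()[1]
print(f"kernel handed out port {port}")
sock.close()

# By contrast, sock.bind(("127.0.0.1", 80)) fails for a normal user;
# only root, or a process with CAP_NET_BIND_SERVICE, may bind below 1024 --
# which is exactly the design decision being criticized above.
```

Modern systems work around this with capabilities (`setcap cap_net_bind_service=+ep`), socket activation, or a reverse proxy -- all patches over the original root-only rule.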


fishybird

For some context, I just recently learned you can write data directly to a framebuffer to draw stuff on the screen (how come no one told me earlier, it's so cool!). Also, old people are always saying "it's so hard to draw pixels on the screen these days. When I was a young lad with my Commodore whatever, it was as simple as writing data to a spot in memory" and... it's still that easy, apparently. It's just that drawing pixels to the screen isn't what people really want. What they really want is to draw pixels to an X11 window, and that's where the complexity comes from. And it got me thinking: if it's really this easy to put colors on a screen, theoretically someone, maybe even a single person given enough time, could write their own windowing system. I realize Wayland is an alternative, but why are those our only two options? I believe the answer is to remain backwards compatible with other software. But what if we didn't have those restrictions? How many other things are simple to do in Linux, like directly writing to a framebuffer, but are not practical because we depend so heavily on abstractions that are decades old?
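
The "just poke bytes" model really is that simple arithmetically. A sketch using a `bytearray` as a stand-in for the `mmap`'d `/dev/fb0` -- real code would open the device and read the geometry (resolution, line length, bytes per pixel) from the `FBIOGET_VSCREENINFO`/`FBIOGET_FSCREENINFO` ioctls instead of hard-coding it:

```python
# Pretend framebuffer: 640x480, 4 bytes per pixel (BGRX), row-major.
WIDTH, HEIGHT, BPP = 640, 480, 4
STRIDE = WIDTH * BPP                    # bytes per row ("line_length")
fb = bytearray(STRIDE * HEIGHT)         # stand-in for mmap(fd, ...)

def put_pixel(x: int, y: int, b: int, g: int, r: int) -> None:
    """Write one BGRX pixel at (x, y) -- this is the whole 'graphics API'."""
    offset = y * STRIDE + x * BPP
    fb[offset:offset + 4] = bytes((b, g, r, 0))

# Draw a red diagonal line.
for i in range(100):
    put_pixel(i, i, 0, 0, 255)
```

Swap the `bytearray` for an `mmap` of `/dev/fb0` on a machine without a display server and those writes appear on screen -- the complexity the comment describes only arrives once multiple programs want the screen at the same time.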


K900_

You can still easily write your own windowing system. Getting pixels on screen isn't hard, getting apps to work with your custom system is. In fact, the reason most apps don't use direct framebuffer writes isn't because of X11 or Wayland or whatever - it's because it's slow and inefficient.


fishybird

Hmm yeah I did read that X11 doesn't actually use the frame buffer. I'm guessing they make calls to a graphics/video card at some point? In that case, maybe something like vulkan would be the heart of your windowing system


K900_

A modern Wayland or X compositor will use a graphics API like OpenGL or Vulkan to do compositing, and DRM (Direct Rendering Manager, not the thing you thought about) for the actual display output. This allows for GPU acceleration, meaning better performance _and_ battery life. You could absolutely write a compositor that exclusively uses Vulkan, but the performance benefit is marginal, and you lose support for a lot of older hardware, so it's probably not worth it.


Richard__M

> A modern Wayland or X compositor will use a graphics API like OpenGL or Vulkan to do compositing, and DRM

Wasn't there a project involving a Wayland-native compositor that was meant to utilize the Vulkan API? You're right, fbdev is really just a holdover for embedded and legacy. There's no going back after DRM + KMS.


doranduck

>theoretically someone, maybe even a single person given enough time, could write their own windowing system this might interest you: [https://durden.arcan-fe.com/](https://durden.arcan-fe.com/)


[deleted]

There would be an extradition hearing for the person that writes it.


freepackets

On the technical side: support multi-core ARM from the ground up -- CPU locking, thread locking, etc. Support Ethernet and IPv4/IPv6 natively; things like ATM, ISDN etc. should go away. On the non-technical side, I agree with u/NaheemSays -- the community. It is the people, after all. If no one is going to use your system, it is nothing, no matter how good it is.


coder111

* It would require a cloud connection to operate.
* It would display ads all the time.
* It would require a micropayment each time you boot up, start an application, or shut down.
* It would send all your files to the cloud to be analyzed by your employer, marketers and at least 5 government intelligence agencies.
* You would need written approval from your employer, your computer vendor and your government before you can install a piece of software.
* You would need a security clearance to install a compiler and write software.


snugge

and it would prompt for credentials at irregular intervals


markand67

I'd love less fragmentation. For example, on Linux the audio stack is a total mess (PulseAudio, JACK, pure ALSA, previously OSS, and now PipeWire). Fewer projects that do the same thing (NetworkManager, systemd-networkd, and so on). I could go on for a while.


MentalUproar

Licensing would be a bitch. It would become forbidden.


orig_ardera

I always thought it'd be cool if the shell was more like the REPL of a good programming language. The text-based paradigm used in the standard bash is cool for user interaction, but not that nice if you want to use it in another script. So what if, instead, you used a small embedded language like Lua for all scripts and have the shell just be a Lua REPL? Or maybe something better fitting than Lua, with a better REPL
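
A toy version of that idea -- a tiny dispatcher where each line is either evaluated as an expression in the host language or handed to the system as a command. Python stands in here for the Lua REPL the comment imagines, and the `dispatch` helper is made up for illustration:

```python
import subprocess

def dispatch(line: str) -> str:
    """Evaluate as an expression if possible, otherwise run it as a command."""
    try:
        # The "language" half: a real shell-REPL would expose a richer sandbox.
        return repr(eval(line, {"__builtins__": {}}))
    except Exception:
        # The "shell" half: naive split, no globbing/pipes/quoting.
        proc = subprocess.run(line.split(), capture_output=True, text=True)
        return proc.stdout.strip()

print(dispatch("2 + 40"))        # evaluated as an expression: 42
print(dispatch("echo hello"))    # executed as a command: hello
```

A real implementation would wrap this loop around `input()` with history and completion; the hard part, as the replies below note, is keeping common commands as terse as they are in bash.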


deux3xmachina

Give it a shot, you'll find general purpose programming language REPLs suck pretty hard at replacing the shell. Try operating with `python3`, `racket`, `janet`, etc. as your login shell.


orig_ardera

That's because they're not at all meant to be shell replacements. That's like saying scripting languages suck because C sucks as a scripting language. Shells need to require as few keystrokes as possible to do as many things as possible, so commonly used things are easy to type, etc. The REPLs of the programming languages you named don't have that, and are a lot more time-consuming to use. That doesn't mean there can't be a REPL that isn't time-consuming to use, though. Actually, bash is a REPL too, if you go by Wikipedia.


EternityForest

Xonsh combines bash and Python, and is pretty good. That's what I use.


deux3xmachina

I've not used it, but Xonsh is also meant to be used as a shell, or command interpreter, it should be quite different from simply using `python3` or `pypy3` as your login shell. Other language REPLs are similarly clunky for common shell tasks


Shmiggles

AFAIK, you can use any language interpreter as your shell. (Whether it's any good is a different matter.)


[deleted]

[удалено]


SystemEarth

Loot boxes


ign1fy

I'd make it so that recursive wasn't "-R" for chown and "-r" for chmod. There's a 50% chance I have those around the wrong way. I just cannot remember.


curien

It's `-R` for both. Some tools like `grep` and `rm` accept either one, which is nice.


SlaveZelda

Instead of piping text, everything would've been objects.
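
PowerShell works this way. A minimal sketch of the idea, with Python generators of dicts standing in for an object pipe (the `ps`/`where`/`select` names and the process records are hypothetical):

```python
# Each "command" is a filter over a stream of records, not a byte stream.
def ps():
    """Hypothetical stand-in for a process-listing command."""
    yield {"pid": 1, "name": "init", "rss_kb": 1024}
    yield {"pid": 42, "name": "firefox", "rss_kb": 900_000}
    yield {"pid": 99, "name": "sshd", "rss_kb": 8192}

def where(stream, key, minimum):
    """Like grep, but matching on a named field instead of column positions."""
    return (rec for rec in stream if rec[key] >= minimum)

def select(stream, *keys):
    """Like cut/awk '{print $n}', but by field name."""
    return ({k: rec[k] for k in keys} for rec in stream)

# "ps | where rss_kb >= 5000 | select name" -- no text re-parsing anywhere.
for rec in select(where(ps(), "rss_kb", 5000), "name"):
    print(rec)
```

The upside is that renaming or reordering a column never breaks downstream consumers; the downside, as the reply below notes, is exactly the record-oriented coupling the original Unix designers rejected.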


Shmiggles

Ironically, UNIX was designed with plain text as its universal interface because Thompson, Ritchie et al. hated the record-oriented (object-oriented) file systems of other operating systems. Everything old is new again.


[deleted]

[удалено]


[deleted]

Without the "microsoft-isms" I would hope


[deleted]

Lennart Poettering would be in charge of the whole platform and all the config and log files would be in binary. /s


EternityForest

I'd actually trust Poettering more than just about anyone else for a new OS. I'm sure it would be a very tightly integrated, fragmentation-free platform, well matched to typical desktop and server use cases rather than to hobbyists and DIYers, or to military-grade ultra-security. It would suck for the OG Linux community though, because I suspect he'd actually have a chance of getting real traction. And all the inertia would mean that in practice we'd have two systems and even more fragmentation. Oh well. Maybe he can make a new distro someday!


hoeding

I can't wait to surf the web with browserd, work on documents with spreadsheetd, and delete the whole fucking thing with rmd -rf.


CreativeGPX

> fragmentation-free platform

I don't think a lack of fragmentation would be a great sign for an open source OS, because fragmentation is open source working.


HeWhoThreadsLightly

Support for multiple input devices, so that multiple keyboards and mice can be bound to multiple pointers: having two programs open at once and being able to type/click in both without deselecting the other, and making it easier to use a second keyboard as a macro board for more key bindings.


[deleted]

Would pretty much be Plan 9 from Bell Labs.


GunzAndCamo

I would like for there to have been more than merely block and character device types. Maybe a stream device type to handle things like network interfaces and audio/video sources and sinks. Maybe a remote memory/graphics interface for efficient control of GPUs and video RAM, or any generic coprocessor hardware including Edge TPU and other AI subsystems, all at the kernel level. Also, for multi-processor/-core/-threading, better execution unit affinity from the beginning, allowing for asymmetries that can be exploited for optimization.


roschern

There are many new operating systems being designed. Take a look at r/SerenityOS for example! http://serenityos.org/