I thought I'd make this thread for all of you out there who have questions but are afraid to ask them. This is your chance!
I'll try my best to answer any questions here, but I hope others in the community will contribute too!
Mods, perhaps a weekly post like this would be beneficial? Lowering the bar to entry with some available support would help keep converts.
Yeah I was thinking the same. Perhaps make a sticky post about it once a week.
How do symlinks work from the point of view of software?
Imagine I have a file in my downloads folder called movie.mp4, and I have a symlink to it in my home folder.
Whenever I open the symlink, does the software (player) understand «oh, this file seems to be a symlink, I should go and open the original file», or is it filesystem-level stuff, and the software (player) basically has no idea whether the file I'm opening is a symlink or the original movie.mp4?
Can I use sync software (like Dropbox, Gdrive or whatever) to sync symlinks? Can I use sync software to sync actual files, but only have symlinks in my sync folder?
Is there a rule of thumb to predict how software behaves when dealing with symlinks?
I just don't grok symbolic links.
A symlink works more closely to the first way you described it. The software opening a symlink has to actually follow it. It's possible for software not to follow the symlink (either intentionally or not).
So your sync software has to actually be able to follow symlinks. I'm not familiar with how gdrive and similar solutions work, but I know this is possible with something like rsync
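For example (a sketch from memory; check man rsync for the flags your version supports), rsync lets you choose exactly this behavior:

    # copy symlinks as symlinks (don't follow them)
    rsync -a --links source/ destination/
    # or replace each symlink with the actual file contents it points to
    rsync -a --copy-links source/ destination/

-a already implies --links; --copy-links is the one that dereferences.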
So I guess it's something like pressing Ctrl+C: most software doesn't specifically handle this hotkey, so in general it will interrupt a running process, but software can choose to handle it differently (in vim, for example, Ctrl+C does not interrupt it).
Thanks.
Fun fact: pressing X (the close button) on a window does not mean your app is closed; it just sends a signal that you wish to close it, and your app can choose what to do with that signal.
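Same idea as the Ctrl+C example above, and you can watch it happen with a tiny shell script (just an illustrative sketch):

    #!/bin/sh
    # catch SIGINT (what Ctrl+C sends) and do something else instead of dying
    trap 'echo "not today!"' INT
    while true; do sleep 1; done

Run it and press Ctrl+C: instead of being interrupted, it prints the message and keeps looping.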
Software opens a symlink the same way as a regular file. The kernel reads the path stored in the symlink and then opens the file at that path (or returns an error if it's unable to for some reason). But if a program needs to perform specific actions on symlinks, it is able to check the file type and resolve the symlink path itself.
To determine how some specific software handles symlinks, read its documentation. It may have settings like "follow symlinks" or "don't follow symlinks".
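You can do that same kind of check yourself from the shell with standard tools (the path is just the example from the question above):

    # is this path a symlink?
    [ -L ~/movie.mp4 ] && echo "it's a symlink"
    # where does it point?
    readlink ~/movie.mp4
    # fully resolve a chain of symlinks to the final target
    readlink -f ~/movie.mp4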
Whenever I open the symlink, does the software (player) understand «oh, this file seems to be a symlink, I should go and open the original file», or is it filesystem-level stuff, and the software (player) basically has no idea whether the file I'm opening is a symlink or the original movie.mp4?
Others have answered well already; I'll just say that symlinks work at the filesystem level, but the operating system is specially programmed to work with them. When a program asks the operating system to open a file at a given path, the OS will automatically dereference the link, meaning it detects the symlink and jumps to the place the symlink points to.
A program may choose to inspect whether a file is a symlink or not. By default, when a program opens a file, it simply lets the operating system dereference the file path for it.
But some apps that work on directories and files together (like find, tar, zip, or git) do need to worry about symlinks, and will check whether a path is a symlink before deciding whether to dereference it. For example, you can ask the find command to list only symlinks, without dereferencing them:

    find -type l
Symlinks are fully transparent to any software that is just opening the file, etc.
If the software really cares about this (like file managers do), it can simply ask the Linux kernel for additional information, such as what type of file it is.
It's a pointer.
E: Okay, so someone downvoted "it's a pointer". Here goes: both hard links and symbolic links are pointers.
The hard link is a pointer to a spot on the block device, whereas the symbolic link is a pointer to a location in the filesystem's list of shit.
That location in the filesystem's list of shit is also a pointer.
So, like, if you have /var/2girls1cup.mov and you click it, the OS looks in the file system, sees that /var/2girls1cup.mov means 0x123456EF, and looks there to start reading data.
If you make a symlink to /var/2girls1cup.mov in /bin called "ls", then when you type "ls", the OS looks at the file /bin/ls, sees that it points to /var/2girls1cup.mov, looks in the file system, sees that it's at 0x123456EF, and starts reading data there.
If you made a hard link in /bin called "ls", it would be a pointer to the location on the block device, 0x123456EF. You'd type "ls", the OS would look in the file system for /bin/ls, see that /bin/ls means 0x123456EF, and start reading data from there.
Okay but who fucking cares? This is stupid!
If you made /bin/ls into /var/2girls1cup.mov with a symlink, then you could use normal tools to work with it: look at where it points, its attributes, etc., and delete just the link, or fully follow (dereference) the link and delete all the links in the chain, including the last one, which is the filesystem's pointer to 0x123456EF called /var/2girls1cup.mov in our example.
If you made /bin/ls into a hard link to 0x123456EF, then when you did stuff to it the OS wouldn't know it's also called /var/2girls1cup.mov. When /bin/ls didn't work as expected, you'd have to diff the output of mediainfo on both files to see that they're the same thing, then look at where on the hard drive /var/2girls1cup.mov and /bin/ls point and compare them to see: oh, someone replaced my ls with a shock video using a hard link.
When you delete the /bin/ls hard link, the OS deletes the entry in the file system pointing to 0x123456EF, and you're able to put the normal /bin/ls back again. Deleting the hard link wouldn't actually remove the data that comprises the file from the drive, because "deleting" a "file" is just removing the file system's record that there's something there to be aware of.
If, instead of deleting the /bin/ls hard link, you opened it up and replaced the video portion of its data with the music video for Never Gonna Give You Up, then when someone tried to open /var/2girls1cup.mov they'd see that music video instead.
If, that is, the file wasn't moved to another place on the block device when you changed it. Never Gonna Give You Up has a much longer running time than 2girls1cup, and without significant compression the OS is gonna end up putting /bin/ls in a different place on the block device that can accommodate the longer data stream. If the OS does that when you finish modifying your 2girls1cup /bin/ls into a rickroll, then /bin/ls will point to 0x654321EF or something, and only you will experience Astley's dulcet tones when you use ls; the old 0x123456EF location will still contain the data that /var/2girls1cup.mov is meant to point to, and you will have played yourself.
Okay, with all that said: how does the OS know what to do when one of its standard utilities encounters a symlink? They have a standard behavior! It's usually to "follow" (dereference) the link. What the fuck good would a symbolic link be if it didn't get treated normally? Sometimes, though, like with "ls" or "rm", you might want to see more information or just delete the link. In those cases you gotta look at how the software you're trying to use treats links.
Or you can just make some directories and files with touch and try what you wanna do and see what happens, that’s what I do.
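A quick sandbox if you want to see the pointer business in action (safe to run in an empty directory):

    touch original
    ln original hardlink        # second name for the same data, same inode
    ln -s original symlink      # new file whose contents are just the path "original"
    ls -li original hardlink symlink    # -i shows inodes: the first two match
    rm original
    cat hardlink    # still works, the data is still there
    cat symlink     # broken: the path it points to is gone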
Don’t think there is.
system32 holds files that live in various places on Linux, because Windows often puts libraries next to binaries while Linux shares them.
The bash in /bin depends on libraries in /lib, for example.
Why do programs install somewhere instead of asking me where to?
EDIT: Thank you all, well explained.
Because Linux and the programs themselves expect specific files to be placed in specific places, rather than a bunch of files in a single program directory like you have on Windows or (hidden away) on macOS.
If you compile programs yourself you can choose to put things in different places. Some software is also built to be more self contained, like the Linux binaries of Firefox.
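For instance, a typical from-source build lets you pick the location yourself (a sketch; the exact steps depend on the project's build system):

    ./configure --prefix="$HOME/.local"
    make
    make install    # everything lands under ~/.local instead of /usr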
Actually, Windows puts 95% of its files in a single directory, and sometimes you get a surprise DLL in your \system[32] folder.
Expanding on the other explanations. On Windows, it's fairly common for applications to come with a copy of everything they use in the form of DLL files, and you end up with many copies of various versions of those.
On Linux, the package manager manages all of that. So if, say, an app needs GTK, then the package manager makes sure GTK is also installed. And since your distribution's package manager manages everything, mostly built from source code, you get a version of the app specifically compiled for the version of GTK the distribution provides.
So if we were to do it kind of the Windows way, it would very, very quickly become a mess, because it's not just one big self-contained package you drop into C:\Program Files. Linux follows the FHS (Filesystem Hierarchy Standard), which roughly defines where things should go: binaries to /usr/bin, libraries to /usr/lib, shared data to /usr/share. A bunch of those locations are somewhat special; for example, .desktop files in /usr/share/applications show up in the menu to launch them. That said, Linux does have a location for big standalone packages: that's usually /opt.
There are advantages and inconveniences to both methods. The Linux way lets you update libraries for all apps at once, reduces clutter, and things are generally more organized. You can guess where an icon file will be located most of the time, because they all go to the same place, usually with a naming convention as well.
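If you're curious, you can ask your package manager about this directly. On a Debian/Ubuntu-style system, for example (the package name is just an example):

    # what libraries does this app depend on?
    apt-cache depends firefox
    # which FHS locations did an installed package put its files in?
    dpkg -L firefox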
different strokes.
windows comes from the personal computing world and retains a bunch of stuff from it to this very day for no good reason. in this case, there used to be no guarantee that a particular installation target would have the target directory mapped in a consistent way, so the installer would make a guess and give the user a chance to change it.
if that sounds stupid, it is. no one writes in assembly anymore, they target the OS and nowadays the OS will have a consistent set of folders to install stuff to. we all know where the program "should" be installed to already.
but it didn't used to be like that in the PC world! used to be your computer wasn't a fixed-purpose windows computer from the jump, never to be anything else. there were different OSes that people would use regularly, and even different DOS environments which a person could use to run programs under. Hard disks weren't disks inside the machine, but big beige external disks that you'd plug up, set beside the computer and access after booting. in that setup, where a programmer targeted DOS (if they cared about the execution environment at all and didn't just write for the processor), it made sense to ask where someone was gonna want to install their software, and to what extent they'd even want to start dirtying up the media they paid good money for with some knucklehead's weird files from some goofy program on a stack of floppy disks.
linux comes from the unix world, where the question of where something installs is easy and straightforward: it installs in $PATH. what is $PATH? it's where the os will look, when you try to run something, to see if it can run any program by that name. if a program isn't installed in $PATH, then when you type its name in and hit enter the computer won't know what the hell you're talking about, and you'll have to type its whole-ass location out and hit enter.
Why didn't unix systems that linux imitates ask you where to install stuff? because usually it wasn't your choice! linux was unix for personal computers and unix was run on systems that took up whole rooms with all sorts of equipment. you might be the user of that system but never have access to the room with all the spinning disks and flashing lights, stuck on a terminal dialing in over a serial line.
so the assumption was that you'd have a variable in your user environment that would say where things were installed but not that you'd have the ability to change it or even install things.
so why in a linux environment would you ever install anything outside of $PATH or even want to be sure where something's installed at all?
even under linux it can be useful to do either. installing outside of path keeps programs from being accidentally autocompleted or invoked. installing in a particular component of $PATH ($PATH can be many directories!) lets you put serious business programs that demand maximum performance on faster media.
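you can see all of this on your own machine with plain shell (nothing fancy):

    # $PATH is just a colon-separated list of directories, searched left to right
    echo "$PATH"
    # list every place a given command could be found along it
    type -a ls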
so why the hell won't linux systems give you the option of installing in a specific location or outside of $PATH altogether?
they will, but unlike windows, they don't ask you. unless you specifically ask to do that unique and very abnormal operation, they just do the usual thing. when you want to install weirdly you gotta dig into your package manager and packaging system. sometimes you unzip a package and change a line in a file then zip it back up and install from your modified version.
NixOS. I don't get what it really is or does? It's a Linux distribution but with caveats or something?
Instead of installing packages through a package manager one at a time and configuring your system by digging into individual config files, NixOS has you write a single config file with all your settings and programs declared. This lets you more easily configure your system and have a completely reproducible system by just copying your nix files to another nixos machine and rebuilding.
It’s also an immutable distribution, so the base system files are only modified when rebuilding the whole system from your config, but during runtime it’s read only for security and stability.
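To make that concrete, the config might look roughly like this (a minimal sketch, not a complete working config; names like alice are made up):

    # /etc/nixos/configuration.nix
    { config, pkgs, ... }: {
      environment.systemPackages = with pkgs; [ firefox git ];
      services.openssh.enable = true;
      users.users.alice.isNormalUser = true;
    }

Then sudo nixos-rebuild switch builds and activates the whole system from that file.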
I want to start with Btrfs and snapshots, is there a good, beginner friendly tutorial for those coming from a ext* filesystem?
If you try a distro that does it by default, then it is no more complicated than ext4 for the user. The distro will set things up for you. I know that openSUSE Tumbleweed and Fedora Workstation set this up by default. Manually configuring it is, however, a bit more complicated.
Great question!
EndeavourOS has a great little wiki of tutorials around BTRFS and setting up snapshots, that's a lot more friendly than just reading wiki manuals.
Here's a link to the one about getting snapshots and rollbacks set up.
Alternatively, I run OpenSUSE Tumbleweed on my main production rig and it uses BTRFS and sets up snapshots from the GRUB menu for you by default!
I'm also using Nvidia, so while it's gotten better and I haven't had to roll back in a long time, Snapper has saved my butt once or twice in the past. ;)
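If you want to poke at it by hand, the Snapper CLI is pretty approachable (commands from memory; check man snapper, and the snapshot numbers are just examples):

    # take a snapshot before doing something risky
    sudo snapper create --description "before nvidia driver update"
    # list what you have
    sudo snapper list
    # compare two snapshots to see what changed
    sudo snapper status 42..43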
Ctrl+Alt+F1, F2, etc. Why do these desktops/CLIs exist? What was their intended purpose, and what do people use them for today? Is it just legacy, or does it still serve a purpose?
Useful if your gui breaks or if you uninstall all your terminal emulators
They are TTYs, they're like terminals your computer spawns at boot time that you can use. Their intended purpose is really whatever you need them for. I use them for if I somehow mess up my display configuration and I need to access a terminal, but I can't launch my DE/WM.
Each one is a virtual terminal, and you can use them just like any other terminal. They exist because the easiest way to put up some kind of interactive display is to just write text to a framebuffer, and that's exactly what your computer does when it boots and shows all that scrolling stuff. The different ones are just different framebuffers that the video card is asked to display when you push Ctrl-Alt-F<number>. You can add more or disable them altogether if you like.
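You can switch to them from a shell too, not just with the keys (assuming a systemd distro; a sketch):

    # jump to virtual terminal 3 (same as Ctrl-Alt-F3)
    sudo chvt 3
    # systemd starts a login prompt on each one via a template unit
    systemctl status getty@tty3.service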
Years ago my daily driver was a relatively tricked out compaq laptop and I used a combination of the highest mode set I could get, tmux and a bunch of curses based utilities to stay out of x for as much of the time as I could.
I mean, each vt had a slightly different colored background image, the text colors were configured, it was slick.
I used to treat them like multiple desktops.
With libcaca I was even able to watch movies on it without x.
I still use them when X breaks, which did happen last year, to my surprise. If your adapter supports a VESA mode that's appropriate to your monitor, then you can use one with very fresh-looking fonts and have everything look clean. Set you a background image and you're off to the races with ncurses programs.
Any word on the next generation of matrix math acceleration hardware? Is anything currently getting integrated into the kernel? Where are the gource branches looking interesting for hardware pulls and merges?
matrix math acceleration hardware?
Can't speak on that but if you want to get news about recent kernel developments (as well as hardware development) you should check out Phoronix.
By default, you can just type nvidia in the software store and click install, wait 5 to 10 minutes after it finishes and restart.
But you will need to run a couple of commands before you restart, to register it with Secure Boot:
    sudo kmodgenca -a
    sudo mokutil --import /etc/pki/akmods/certs/public_key.der
See: https://rpmfusion.org/Howto/Secure%20Boot
I use ublue, so I never need to deal with this.
Both Debian and Ubuntu come with nonfree firmware blobs by default. Nonfree drivers such as the Nvidia proprietary driver can be installed graphically in Ubuntu if you open the drivers app.
Debian instructions are here and involve adding the non-free contrib repos to your /etc/apt/sources.list and then installing the nvidia-driver package.
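On Debian the whole dance looks roughly like this (a sketch; follow the Debian wiki for the exact components for your release):

    # after adding contrib and non-free to /etc/apt/sources.list
    sudo apt update
    sudo apt install nvidia-driver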
How can I run a sudo command automatically on startup? I need to run sudo alsactl restore to mute my microphone from playing In my own headphones on every reboot. Surely I can delegate that to the system somehow?
If you run a systemd distro (which is most distros: Arch, Debian, Fedora, and most of their derivatives), you can create a service file, which will autostart as root on startup.
The service file /etc/systemd/system/<your service>.service should look like:

    [Unit]
    Description=some description

    [Service]
    ExecStart=alsactl restore

    [Install]
    WantedBy=multi-user.target

then:

    systemctl enable <your service>.service --now

You can check its status via:

    systemctl status <your service>.service

You will need to change <your service> to your desired service name. For details, read: https://linuxhandbook.com/create-systemd-services/
This one seemed perfect, but nothing lasts after the reboot for whatever reason. If I manually re-enable the service, it's all good, so I suspect there's no issue with the below. I added After=multi-user.target after the first time it didn't hold through a reboot.

    [Unit]
    Description=Runs alsactl restore to fix microphone loop into headphones
    After=multi-user.target

    [Service]
    ExecStart=alsactl restore

    [Install]
    WantedBy=multi-user.target

When I run a status check, it shows the service deactivates as soon as it runs:

    Apr 11 20:32:24 XXXXX systemd[1]: Started Runs alsactl restore to fix microphone loop into headphones.
    Apr 11 20:32:24 XXXXX systemd[1]: alsactl-restore.service: Deactivated successfully.
Does After=... solve the problem or cause the problem? Sorry, I can't parse what you were trying to say.

It seems to have no effect either way. Originally I attempted it without; then, when it didn't hold after a reboot, and after some further reading, I added the After= line to try to ensure the service isn't starting before it should be possible.
I can manually enable the service with or without the After= line, with the same result of it actually working. It just doesn't hold after a reboot.
That is interesting. BTW, I assume that command will not run forever, i.e. it will terminate relatively soon? That could be why the service shows as deactivated, not because it didn't run. You can try adding

    ; echo "command terminated"

at the end of ExecStart to see whether it terminated; you can also echo the exit code to debug. If the program you use has a verbose mode, you can also try turning it on to see if there is any error. EDIT: indeed, try

    alsactl restore --debug

There is also a possibility that this service runs before the device you need to restore is loaded, so it won't have any effect.
On a related note, did you install the program via your package manager, and which distro are you running? Sometimes SELinux will block a program from running, but then the error message would say "permission denied", rather than the message you're seeing.
Running something at start-up can be done multiple ways:
- look into /etc/rc.d/rc.local
- systemd (or whatever init system you use)
- cron job
You got some good answers already; here is one more option: create a *.desktop file that runs sudo alsactl restore, and copy the *.desktop file to ~/.config/autostart. (You might need to configure sudo to run alsactl without a password.)
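The file itself is tiny; something like this (a sketch, with a made-up Name and filename):

    # ~/.config/autostart/restore-alsa.desktop
    [Desktop Entry]
    Type=Application
    Name=Restore ALSA state
    Exec=sudo alsactl restore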
IMHO the cleanest option is systemd.
what is hyprland
why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes
what is wayland and xorg, and why does everyone argue about them
hyprland
A wayland compositor and tiling window manager. The lead developer of the project is a Polish transphobic workaholic.
why do ppl use the CLI for things like making and moving files? i find the GUI easier and faster as well as less prone to mistakes
If you understand how shell scripting works you can easily automate menial tasks. CLI is also an interface shared by all operating systems so if you know how to work around in a shell you're not bound to any particular workflow/desktop GUI. Keep using GUIs though, they exist for a reason.
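A small taste of what that automation looks like (plain POSIX-ish shell; the filenames are made up):

    # move every .log file older than 30 days into an archive folder
    mkdir -p archive
    find . -maxdepth 1 -name '*.log' -mtime +30 -exec mv {} archive/ \;

Try selecting those same files by age in a GUI file manager and you'll see why people reach for the shell.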
what is wayland and xorg, and why does everyone argue about them
Both are display protocols in charge of displaying graphics to your screen. Xorg is over 30 years old, while Wayland is only about 15. The controversy around Xorg was that the codebase was unmanageable and the design architecture of the program was inherently flawed (example: a screenlocker getting access to your entire screen, including apps and desktop, making writing malware for X11 a 3-line Python script). X11 was designed at a time when people were using actual real-life terminals and mainframes. Wayland is much more modern and akin to how modern graphics APIs are handled (for the most part).
Wayland at its core has been and always will be designed by committee, so a lot of the arguing is necessary (though sometimes long-winded) to make sure Xorg's mistakes are not repeated. Protocols take months if not years to be merged into Wayland, and those protocols have to be implemented by the Wayland compositors themselves, rather than everyone sharing one program altogether like with Xorg.
Watch this video for more information; it explains things much better and is from an actual Wayland board member.
Why YOU should write a Wayland compositor – Victoria Brekenfeld – HiP22 Berlin
it's faster for me to type out cp -r source/directory destination/directory than it is to open a file manager, navigate to my source, ctrl-a ctrl-c navigate to my destination, ctrl-v. this is not always true. look at the work done by the plan9 people to learn more
idk what hyprland is specifically, but it's either a window manager or compositor or something for use with wayland.
wayland and xorg are ways to do graphical user interfaces in unix systems. wayland is supposed to fix problems that have long been solved or worked around in xorg. it's new and doesn't work or support everything yet. xorg is old and has problems, but it works very well.
Xorg is a display server for the Linux ecosystem. Every ecosystem has a display server. It is what makes it possible for you to have graphical applications with movable windows that can talk to each other, or a mouse cursor that can click on things.
Wayland is a replacement for Xorg because Xorg is old and its developers said an alternative is needed. Wayland has differences that I won't discuss here, but I'll be happy to do so if you ask.
Hyprland is a wayland compositor. A compositor is basically an implementation of wayland (there are many) and gives you a windowing system that you can run graphical applications through. It is usually a lot more minimal than having a full graphical desktop like KDE or Gnome.
Hyprland belongs to a class of compositors called "tiling", which forces windows into a tiling formation. In other words, windows do not overlap or stack on top of each other. Hyprland stands out in having a lot of eye candy and visual effects.
I use CLI for moving files, etc. After you use it for a while, you find out it can be more efficient, faster, and more pleasant to work with.
I have a Windows PC with 6 drives, mostly SSDs and one HDD, that I assume are all NTFS. Two of the drives are NVMe(?) attached to the mobo, and I only have one mobo with NVMe slots. I have a number of older boards that top out at SATA connections.
If I install Linux Mint, can I format one NVMe drive with whatever the currently preferred Linux filesystem is, install Mint, and move the files from the other drives around as I format each one?
Or do I need to move all the data I want to keep to SATA drives, put them in a different windows box, and then copy them over using a network connection?
It's been a while, and I'm guessing my lack of finding an answer means Linux still doesn't work with NTFS well enough to do what I'm thinking of.
linux can read and write ntfs, edit partition tables, and resize ntfs partitions.
you could (theoretically, do not do this!) free up 8 GB of space on your ssd in windows, defragment it, then boot a linux installer and use it to shrink the ntfs partition and install linux in that 8 GB.
It depends on exactly how you plan to do things. The Linux kernel supports reading NTFS but not writing to it. I’m not sure exactly how full your drives are, but you might be able to consolidate some before installing Linux.
There are a couple utilities that let your mount an NTFS file system for read & write, but I wouldn’t trust them for important data.
Edit: This is outdated as of like 2021. Don’t listen to me
As long as I can read from the second nvme drive I have enough total space to easily shuffle around.
My issue was that I couldn't fit everything onto just the SSDs at the same time.
Reading works great! If you need to mount the drive manually (IIRC Mint should do this for you), you'll need to specify that it's NTFS instead of letting it automatically detect the file system, but other than that it's just plug and play.
Linux NTFS support is pretty good. The kernel drivers do all the basics, but you may still want the ntfs-3g driver installed for some of its tools. ntfsfix has saved me before, and I think it's from the ntfs-3g package.
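If Mint doesn't automount it, the manual mount is one line (the device name is an example; check lsblk first):

    sudo mkdir -p /mnt/windows
    # kernel 5.15+ ships a native read/write driver called ntfs3
    sudo mount -t ntfs3 /dev/nvme0n1p2 /mnt/windows
    # or, with the older FUSE driver from the ntfs-3g package:
    sudo mount -t ntfs-3g /dev/nvme0n1p2 /mnt/windows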
It is pretty great, but for now they are still mainly aimed at power users. I have used home-manager for a bit, but I feel some modules are not exactly well maintained, and using it is not exactly "maintenance-free". BTW, they pollute your home dir like crazy, as if XDG never existed.
I feel like Nix aims at ease of deployment, not ease of maintenance, especially for desktop use. However, I love atomic distros; they are on the other part of the spectrum: you cannot replicate your setup exactly by copying a dir, but they are very easy to use, with sane defaults.
Like in Nix.
You write the whole system config in a file or a few (including GRUB, SSH, etc.), then rebuild the system, and you have a system based on that config. There are projects like this for Arch, e.g. blendOS (the alpha release).
Someone gifted me some old iPad that's more than 10 years old. What steps should I take to install Linux on the iPad?
You can't. Apple's iPads and iPhones are e-waste from the moment they run out of security and OS updates. Apple doesn't allow third party installations.
It's technically possible, but there aren't any proper instructions; I assume it's pretty complicated, and you could break the device doing it. Also, I don't think it would run particularly well, since there are no Linux drivers for Apple's proprietary hardware (except for the M1, which was reverse-engineered, but iPads use A-series chips).
What's the difference between /bin and /usr/bin and /usr/local/bin from an architectural point of view? And how does sbin relate to this?
There's a standard. /usr was often a different partition.
/bin - system binaries
/sbin - system binaries that need superuser privileges
/usr/bin - normal binaries
/usr/sbin - normal binaries that require superuser privileges
/usr/local/bin - executables that aren't 'packaged', i.e. installed by you or some other program system-wide
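You can see the layout (and which directory a given command actually comes from) with:

    # every match for a command across $PATH, in search order
    type -a bash
    # on many modern distros /bin and /sbin are now just symlinks into /usr (the "usr merge")
    ls -ld /bin /sbin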
Executable programs! Bin-ary instructions for the computer to perform!
Also, technically these will not just have binaries. I should have said executable, really, because scripts are there, too.
Former FreeBSD user here. I always kept /usr separate, including /usr/home