Linux

Can't you create a package to be administered by your package manager?
like this https://wiki.archlinux.org/index.php/Creating_packages
Doesn't necessarily save me from the dependency hunt, which is the real issue in that scenario. If I'm building from source I can just prefix to anywhere I want and make a self-contained build.
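For illustration, a self-contained prefix build might look like this (the package name and version are hypothetical):

  # build and install entirely under a private prefix
  ./configure --prefix="$HOME/opt/foo-1.0"
  make
  make install
  # the whole installation lives under one subtree, so removal is just:
  rm -rf "$HOME/opt/foo-1.0"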

I have some problems with its page: how do you remove a package? They say to use rm -rf, but what about the symlinks? And the dependencies?
This could be solved with hardlinks instead of symlinks, but IMO it's better if non-system packages share zero non-system dependencies. Then packages can be updated independently without ever breaking each other. It has a greater memory cost because SOs become non-reusable, but honestly, who cares?
helios wrote:
Doesn't necessarily save me from the dependency hunt, which is the real issue in that scenario. If I'm building from source I can just prefix to anywhere I want and make a self-contained build.


But the whole purpose of a package is to have something that can readily be installed, that is aware of its dependencies and can obtain them. There are binary packages and source packages.


ArchLinux wrote:
Overview

Packages in Arch Linux are built using the makepkg utility and the information stored in a PKGBUILD file. When makepkg is run, it searches for a PKGBUILD in the current directory and follows the instructions therein to either compile or otherwise acquire the required files to be packaged within a package file (pkgname.pkg.tar.xz). The resulting package contains binary files and installation instructions; readily installed with pacman.
An Arch package is no more than a tar archive, or 'tarball', compressed using xz, which contains the following files generated by makepkg:
The binary files to install.
.PKGINFO: contains all the metadata needed by pacman to deal with packages, dependencies, etc.
.MTREE: contains hashes and timestamps of the files, which are included in the local database so that pacman can verify the integrity of the package.
.INSTALL: an optional file used to execute commands after the install/upgrade/remove stage. (This file is present only if specified in the PKGBUILD.)
.Changelog: an optional file kept by the package maintainer documenting the changes of the package. (It is not present in all packages.)
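To make that concrete, a minimal PKGBUILD has roughly this shape (the field names are real; the package name, source URL and dependency below are placeholders, not a real package):

  pkgname=myprog
  pkgver=1.0
  pkgrel=1
  arch=('x86_64')
  depends=('glibc')
  source=("https://example.com/$pkgname-$pkgver.tar.gz")
  sha256sums=('SKIP')

  build() {
      cd "$pkgname-$pkgver"
      ./configure --prefix=/usr
      make
  }

  package() {
      cd "$pkgname-$pkgver"
      make DESTDIR="$pkgdir" install
  }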


From the point of view of you making your own package:

So for your software, you have some library files of your own, and some system ones (which in turn might depend on some other libraries) - you know which the initial system libraries are, but obviously need to check for the other ones. That is what the ldd command is for. But never mind that, the package manager will find the dependencies anyway. In the case of ArchLinux, it is the .PKGINFO file. The RedHat Package Manager (rpm) does the same thing.
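For example (hypothetical binary; output abridged):

  $ ldd /usr/local/bin/myprog
          libfoo.so.1 => /usr/local/lib/libfoo.so.1 (0x00007f...)
          libc.so.6 => /usr/lib/libc.so.6 (0x00007f...)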

This could be solved with hardlinks instead of symlinks, but IMO it's better if non-system packages share zero non-system dependencies. Then packages can be updated independently without ever breaking each other. It has a greater memory cost because SOs become non-reusable, but honestly, who cares?


Not sure whether this was in relation to GoboLinux or not, but I disagree with not sharing non-system libraries. What if there are some non-system libraries that deal with an RDBMS, say? Why should various software that wants to use the facilities provided by those libraries have to provide some or all of those libraries with its compartmentalised installation, every time, when the libraries are already there in a shared system?

In terms of removing software, one really should use the package manager - it's asking for trouble if one doesn't. This is no different from requiring a Windows user to always use install/uninstall programs. Sure the Windows user could tempt fate by removing directories, but that is asking for trouble too. Even with uninstall programs, poorly written ones don't remove everything they should - which gives rise to programs to clear out dead wood from the registry for example.

In contrast, Linux package managers check for dependencies when uninstalling also, so it doesn't break something else.

So I don't know where all this fear about dependencies comes from, IMO it really isn't a problem on Linux.
But the whole purpose of a package is to have something that can readily be installed, that is aware of its dependencies and can obtain them.
ne555's proposed solution was in response to my scenario where the package manager is already useless. Given that assumption, no, the PM will not get the dependencies for you.

Why should various software that wants to use the facilities provided by those libraries have to provide some or all of those libraries with its compartmentalised installation, every time, when the libraries are already there in a shared system?
Because doing otherwise implies a tremendous increase in complexity (see for example the horrid mess that is WinSxS) for negligible gains. For any non-trivial service, the space used by user data will very quickly drown that used by code. There's no point in deduplicating executables across package boundaries.

This is no different from requiring a Windows user to always use install/uninstall programs.
I don't think anyone has ever said that this requirement is a good thing. It's not, it's terrible.

Even with uninstall programs, poorly written ones don't remove everything they should
Exactly. That's exactly the reason why a simple installation process is better than a complex one. "Windows rot" is caused precisely by overly complex installation processes that needlessly litter the system, and by poorly designed uninstallation processes that are unable to fully revert those changes. If all your code is in a single location then you can have a minimal uninstaller. Just a call to rd /q /s. Try getting that wrong.

So I don't know where all this fear about dependencies comes from
Don't confuse fear and annoyance.

IMO it really isn't a problem on Linux.
Sure. Hey, here's a fun project: take this (https://github.com/GNOME/librsvg ) library, and try to package it so it can be installed on any distribution with minimal effort.
Phantasm mode: also BSDs.
helios wrote:
ne555's proposed solution was in response to my scenario where the package manager is already useless. Given that assumption, no, the PM will not get the dependencies for you.


But both ne555 and I are saying one can use the PM to make a pkg. For example, it's possible for me to make an rpm of gcc 5.2 for Fedora 17; it's not available anywhere because F17 is so old. It's easier for me and others to upgrade to F22, but it still could be done.

helios wrote:
Because doing otherwise implies a tremendous increase in complexity (see for example the horrid mess that is WinSxS)


Your rejoinder has an example from Windows

helios wrote:
There's no point in deduplicating executables across package boundaries.


The RedHat PM appears to give the exact opposite advice: Don't provide 1 pkg for your program, provide pkgs for the reusable components of your program; Don't combine your packages with what others have already packaged; Don't re-package what is already packaged.

https://fedoraproject.org/wiki/How_to_create_an_RPM_package


It also appears that your comment comes from experience with Windows; acceptable methods there probably don't apply at all to Linux. It's a different paradigm, a different education process, but that doesn't make it bad. Each system has its complexities and subtleties.

helios wrote:
I don't think anyone has ever said that this requirement is a good thing. It's not, it's terrible.


On Windows it is. It works really well on Linux.

helios wrote:
Exactly. That's exactly the reason why a simple installation process is better than a complex one. "Windows rot" is caused precisely by overly complex installation processes that needlessly litter the system, and by poorly designed uninstallation processes that are unable to fully revert those changes. If all your code is in a single location then you can have a minimal uninstaller. Just a call to rd /q /s. Try getting that wrong.


Windows appears to make it hard for itself, by "littering" everywhere, and by the uninstaller's inability (or rather, its lack of knowledge of what is required) to revert those changes.

Linux might seem to be overly complex, but I gather that there are methodologies (at least a set of requirements) for configuring, making, installing, cleaning, and uninstalling.

Linux does put software into its own directory, so part of the uninstall would be to remove that subtree. It's just the libraries that are in one of a small number of standard places, and there are tools to effectively, cleanly and safely remove them.
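For example (package name hypothetical):

  # Arch: remove a package plus any dependencies nothing else needs
  sudo pacman -Rs myprog
  # Fedora: remove a package (packages that require it are removed too)
  sudo yum remove myprog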

helios wrote:
Don't confuse fear and annoyance.


Have you got a specific annoyance?

Sure. Hey, here's a fun project: take this (https://github.com/GNOME/librsvg ) library, and try to package it so it can be installed on any distribution with minimal effort.
Phantasm mode: also BSDs.


Some comments about making a package: I haven't done it, but I gather that it would not be impossible for someone with a reasonable amount of knowledge and guidance documentation to do, at least on their own machine.

I understand that a Source Package really just automates all the aspects of a build. All the detail of that is in scripts.

Making a binary package for a different distro would be harder - it obviously requires knowledge of cross compiling.

I can see some problems though: descendants of Debian use apt for their PM, while Fedora uses rpm, and I don't know what the BSDs do.

So to package for all the different distros might require one or more people with a combined knowledge of them all. I guess the repository makers would have some knowledgeable people.

This sounds bad, to require all this expertise, but consider what is being attempted. Imagine, if you will, that there are 10 slightly or even moderately dissimilar versions of Windows 10, and they work on different architectures. We are trying to compile one code base so that it will work on all of them.

So maybe this complexity of Linux comes from the variety of distros, not from an individual system.
But both ne555 and I are saying one can use the PM to make a pkg.
And? How is that relevant to my complaint?

Your rejoinder has an example from Windows
Any package management system is orders of magnitude more complex than being able to delete an entire subtree.

The RedHat PM appears to give the exact opposite advice: Don't provide 1 pkg for your program, provide pkgs for the reusable components of your program; Don't combine your packages with what others have already packaged; Don't re-package what is already packaged.
I'm sure that works wonders if you're packaging specifically for RH systems. Again, try targeting a variety of systems with reasonable effort.

It also appears that your comment comes from experience with Windows; acceptable methods there probably don't apply at all to Linux. It's a different paradigm, a different education process, but that doesn't make it bad.
Inconvenience is bad. Linux is inconvenient for both users (no way to know whether a given executable will work if moved to a different box) and developers (no single way to target a broad range of systems). Even BSDs are better than Linux, because while they conventionally have a similar directory structure, they do provide binary compatibility. You can build for a specific version of FreeBSD and it will continue working in successive versions.

Windows appears to make it hard for itself, by "littering" everywhere, and by the uninstaller's inability (or rather, its lack of knowledge of what is required) to revert those changes.
Truth be told, reverting some changes (particularly some registry changes) may be an undecidable problem because the order of deinstallation of unrelated applications may be different than the order of installation. Windows Installer attempts to solve that, but it's a whole can of worms. IMO with the way Windows is right now, code that doesn't need to link to other code should aim for simple installation processes.

Linux might seem to be overly complex, but I gather that there are methodologies (at least a set of requirements) for configuring, making, installing, cleaning, and uninstalling.
"Ow! It really hurts when I hit my head with this hammer! I should take some painkillers so it doesn't hurt as much."

Have you got a specific annoyance?
What have I been talking about until now?

Making a binary package for a different distro would be harder - it obviously requires knowledge of cross compiling.
You're allowed to assume a compiler and other compiler-related tools are installed. I assure you, even with this it's a surprisingly difficult endeavor.

Imagine, if you will, that there are 10 slightly or even moderately dissimilar versions of Windows 10, and they work on different architectures. We are trying to compile one code base so that it will work on all of them.
Well, that's kind of the point. They don't exist. Look at the BSDs. They also don't exist (abs(OpenBSD - FreeBSD) <<< abs(Debian - Fedora)). Linux is the weird kid.

So maybe this complexity of Linux comes from the variety of distros, not from an individual system.
You could have various distributions that put things almost anywhere they wanted if someone had taken the trouble to have the least bit of standardization.
> This could be solved with hardlinks instead of symlinks
With symlinks, after you remove the directory of the program, the links become invalid. They still exist but point to nothing, and thus are unusable.

With hardlinks, you need to remove them all to actually delete the file. So removing the directory would not uninstall; the binaries would still be there, but perhaps they would not work the same (because of missing config files, which did not have any hardlinks).

Unless I misunderstand you, hardlinks would make the problem worse.
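A quick sketch of the difference (hypothetical paths; note that a hardlink also requires both paths to be on the same filesystem):

  # symlink: removing the target directory leaves a dangling link
  ln -s /Apps/foo/foo /usr/bin/foo
  rm -rf /Apps/foo
  /usr/bin/foo      # fails: the link now points at nothing

  # hardlink: removing the directory does not delete the file itself
  ln /Apps/foo/foo /usr/bin/foo
  rm -rf /Apps/foo
  /usr/bin/foo      # still runs: the inode survives through the second link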


> Just a call to rd /q /s. Try getting that wrong.
I want my /usr back
https://github.com/MrMEEE/bumblebee-Old-and-abbandoned/commit/a047be85247755cdbe0acce6f1dafc8beb84f2ac
rm -rf /usr /lib/nvidia-current/xorg/xorg
perhaps they would not work the same (because of missing config files, which did not have any hardlinks)
Then the installation procedure was incorrect and/or those files should never have been shared between two packages.

rm -rf /usr /lib/nvidia-current/xorg/xorg
LOL. Of course, it'd be better not to have automated uninstallers in the first place.
ne555 wrote:
How does the "each package has its own directory" approach solve this?
Consider a directory layout like /Packages/[name of package]/[version of package]/
- Each package has its own directory
- Each version has its own directory
This makes it really easy to maintain dependencies and choose versions. Of course the system would need to be designed with this in mind - you can't easily apply this to any existing system without some reworking. Still, it's not too difficult on Windows and many developer applications already support installation of multiple versions quite easily. Java does it by default, in fact.
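A sketch of that layout, with hypothetical package names:

  /Packages/openssl/1.0.2/lib/libssl.so
  /Packages/openssl/1.1.0/lib/libssl.so   # both versions coexist
  /Packages/myapp/2.3/bin/myapp           # resolves against exactly the version it declares
  /Packages/myapp/2.4/bin/myapp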
> Then the installation procedure was incorrect and/or those files should never
> have been shared between two packages.
There is only one package.

Suppose that the package `foo' only has `/Apps/foo/foo' and a hardlink in `/usr/bin/foo'
If you remove the /Apps/foo directory, the hardlink in /usr/bin/foo remains unaffected. You did not uninstall anything.

Suppose `bar' has `/Apps/bar/bar' and `/Apps/bar/bar.conf' and a hardlink in `/usr/bin/bar'. (`bar.conf' does not have a hardlink because there is no need.)
Removing the `/Apps/bar' directory would delete that config file, and the program `bar' would still work, but differently than before.


> It has a greater memory cost because SOs become non-reusable, but honestly, who cares?
I prefer to fill my disk with documents, not with programs.


> This makes it really easy to maintain dependencies and choose versions.
How do you choose versions?
I've got foo.py that runs with python 2.5, and bar.py that needs 2.7. What process do you visualize in order to run them both at the same time?
(both scripts start with #!/usr/bin/env python )
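One conventional workaround - assuming the distro installs version-suffixed interpreters, which most do - is to pin the version in each script's shebang:

  # foo.py begins with:
  #!/usr/bin/env python2.5
  # bar.py begins with:
  #!/usr/bin/env python2.7
  # both then run side by side:
  ./foo.py &
  ./bar.py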
I can buy a terabyte of space for less than $100 nowadays... and you used the disk-space argument by saying, "I prefer to fill my disk with documents, not with programs."

So at what point do we decide that disk space doesn't matter, especially for something so relatively small compared to media files or images?

(Don't mean to nitpick on one point... this argument in particular just keeps getting older though.)
ne555 wrote:
(both scripts start with #!/usr/bin/env python )
There's your problem.

As I said, it is not easy to make it work in the existing operating systems we already have. We have backed ourselves into a corner.
Suppose that the package `foo' only has `/Apps/foo/foo' and a hardlink in `/usr/bin/foo'
If you remove the /Apps/foo directory, the hardlink in /usr/bin/foo remains unaffected.

Suppose `bar' has `/Apps/bar/bar' and `/Apps/bar/bar.conf' and a hardlink in `/usr/bin/bar'. (`bar.conf' does not have a hardlink because there is no need.)
Removing the `/Apps/bar' directory would delete that config file, and the program `bar' would still work, but differently than before.
Just don't have /usr, period. /usr represents the exact opposite of a modular system.
Have something like PATH that the system can automatically clean up when paths go missing.

I prefer to fill my disk with documents, not with programs.
Again, who cares? Executable code represents the smallest proportion of the data in a file system, even on programmers' computers. A large executable can be, what? 20 MiB?
helios wrote:
Have something like PATH that the system can automatically clean up when paths go missing.
PATH is another thing I really don't like about Windows and *nix. The idea of where to look for certain kinds of files, and in what order, should definitely NOT be in a string variable; it should be safely managed by the system and the user, and changing it should require the user to see what changes are being made and accept or deny them.

Environment variables in general are a pretty hacky thing. :(
helios wrote:
And? How is that relevant to my complaint?


You were saying that the PM couldn't be used; I am saying the PM can be used, even to make a package out of something new for an old system, as in gcc 5.2 for Fedora 17. It should be able to make a pkg out of anything.


helios wrote:
Any package management system is orders of magnitude more complex than being able to delete an entire subtree.


And your example was WinSxS; let's go back to that for a bit, shall we? I agree, it's horrid - worse, it's an atrocity!

Now how might such a situation have come about?

Something simple, like opening a file by using a dialog box: it should be able to use a Windows dll to do that; it's a basic core OS feature. Now I don't know for sure, but am quite likely to bet that there is no equivalent thing to WinOpenFileDlg.dll as a single library file. I am guessing that there is more likely a WinCoreDlg1.0.dll or equivalent, which probably has FileSave, FileSaveAs, FileNew and so on, all bundled together. To continue my speculation, FileSave et al. would call another dll, like SavePdf.dll say. What happens when a new file format becomes available? We need a new version, WinCoreDlg1.1.dll. There is existing software that only knows about WinCoreDlg1.0.dll, so we have to keep both of them. This scenario continues every time there is a Service Pack, so we ultimately have lots of them, and we end up with WinSxS to solve the nightmare with another nightmare.

Now, because a lot of Windows software is commercial, and there is a dll fiasco, developers (especially smaller software ones) create a standalone arrangement where they can, and bundle up dlls, possibly duplicating an existing system dll - because they just don't know what might happen to it.

So this is the exact situation that you and LB are saying you like. Sure, it might seem easier to fully compartmentalise software, but consider that it is better in the long run to have a system that can deal with the bigger picture.

Things like WinSxS and other Windows nightmares drive some people away, to look for something better.

Now this is not to say that there is no co-operation with Windows software: on the contrary all the VBA / .NET applications co-operate with each other, and even provide an API which can be used by all the other applications.

Contrast this to how Linux does it: each of FileSave, FileSaveAs, FileNew and so on has its own library file, and each file format has its own library files to deal with it, which can be called by these File library files. When MySoftware1.0 is upgraded to MySoftware2.0 to cope with a bunch of new file formats, the new file format libraries are included in the installation, and the code of the application can call these file format libraries - it's a software upgrade after all. Also, if there are some file formats that are deprecated, then the libraries that deal with them can be uninstalled; the PM and/or uninstall script can check that no other software needs them, and uninstall them without fear of breaking something else.

However, with proprietary UNIX, it is of course completely closed - they look after their own house, and don't care about anyone else - unless they think the opposition might be doing something better.

In the final analysis, it's fair to say that we live in a conflicted world. Unless I am very fortunate, I don't think I will see software like AutoDesk Civil3D being available on Linux for even a moderate price. But one never knows: we already have BricsCAD for $1,000 - one day, if enough people decide it will be worth it, they might actually do a project of this considerable size.

Microsoft does make nice applications, a lot of other companies provide software that uses MS exclusively, and a vast majority of users use MS because of this.

For me, a utopian existence would be working for a company that uses proprietary UNIX software (or working for the UNIX company itself) and has the rights to develop that software as well. Some large organisations do this: they pay a shedload of money for the software, then another shedload of money to get the source code for it, under a strict license.

I'm sure that works wonders if you're packaging specifically for RH systems. Again, try targeting a variety of systems with reasonable effort.


Yes, there are different PMs and there are a variety of distros; agreed, that is somewhat more awkward. IMO, on balance, I would rather deal with that than deal with Windows nightmares - see my previous comments.

helios wrote:
IMO with the way Windows is right now, code that doesn't need to link to other code should aim for simple installation processes.


See my previous comments.

helios wrote:
Inconvenience is bad. Linux is inconvenient for both users (no way to know whether a given executable will work if moved to a different box) and developers (no single way to target a broad range of systems). Even BSDs are better than Linux, because while they conventionally have a similar directory structure, they do provide binary compatibility. You can build for a specific version of FreeBSD and it will continue working in successive versions.


On Windows, one could try to move a single executable file (a simple application) and hope that any dependencies exist on the other machine, but anything more than that probably requires installation. With installation, it's going to work no matter what the OS is.

I have been told that I could install another distro (Ubuntu, say) onto a separate partition on my machine, and still use existing software in my /opt partition (installed from Fedora) from the new distro.

With different OSes, let me re-put my argument about 10 different OSes: say there are not 10 different Windows OSes, but 10 different OSes in their own right. We already have 3 (discounting proprietary UNIX, which is just too expensive): Windows, Mac and various Linux. What if we had 7 other OSes that their owners had independently created, and they are all quite different from each other? Someone wants their software to run on all of them: should a standard for directory layout and other things be imposed on anyone creating their own OS, to make it easier for developers? Should DOS have been written off because it was pathetic and vastly different compared to Apple and UNIX at the time? Should developers complain about not having a single way to target all of them?

ne555 wrote:
rm -rf /usr /lib/nvidia-current/xorg/xorg
helios wrote:
LOL. Of course, it'd be better not to have automated uninstallers in the first place.


That's a prime reason to have an automated uninstaller. That error happened because the user typed an extra space. Uninstallers can be tested.

I have seen it happen in real life. I worked at a large organisation that had $1Bn in sales per annum. One day, we had no invoices. The reason: someone mistyped an ftp command. Needless to say, there was some screaming to get at least a script working.

even to make a package out of something new for an old system, as in gcc 5.2 for Fedora 17.
To what advantage, regarding dependency satisfaction? You still have to get them manually, if only once.

To continue my speculation, FileSave et al. would call another dll like SavePdf.dll say.
What? No. A library that contains common dialog functions has no business knowing about file formats. A client application should call FileSave() and FileSave() should give back a path. The application should then pass this path to whatever function handles whatever file format is needed.

We need a new version, WinCoreDlg1.1.dll. There is existing software that only knows about WinCoreDlg1.0.dll, so we have to keep both of them.
Windows is actually really good in this regard. There's no DLL versioning at all for system libraries. Each version of the public API is a strict superset of the previous version, and the names don't change. For core system libraries, which thankfully also includes some GUI, a given installation has a single version of each DLL.

every time there is a Service Pack, so we ultimately have lots of them, and we end up with WinSXS to solve the nightmare with another nightmare.
No, WinSxS contains mostly runtime libraries, besides whatever garbage some bastard third party developer decided to put there (I notice Adobe Flash is in there). The C++ redist, .NET runtimes, etc. Most of it is .NET libraries. Fair enough, the .NET runtime is friggin' huge; I would not want to have that duplicated. However, I don't see why the successive versions of .NET need to be disjoint. Having .NET 4.5 not include, say, 2.0 saves only a few megabytes (and often the user ends up having to install it manually anyway).

Now, because a lot of Windows software is commercial, and there is a dll fiasco, developers (especially smaller software ones) create a standalone arrangement where they can, and bundle up dlls, possibly duplicating an existing system dll -
No, system DLLs are incompatible across OS versions, and they come installed anyway. No point in bundling them.
Bundling runtimes, particularly the C++ redist, is pretty much standard practice because it simplifies deployment and reduces user annoyance, since the redists don't come installed by default, for some reason.

Sure, it might seem easier to fully compartmentalise software, but consider that it is better in the long run to have a system that can deal with the bigger picture.

Things like WinSxS and other Windows nightmares drive some people away, to look for something better.
WinSxS is the solution for DLL sharing. The application declares in a manifest which version it needs, and the system takes care of resolving the dynamic linking. It's much more robust than what Linux does, which is to just shove everything in the same place and provide symlinks for "compatibility".
The problem is not that it doesn't work, the problem is that it's stupid complex.

IMO, on balance, I would rather deal with that than deal with Windows nightmares - see my previous comments.
So you'd rather duplicate effort than compartmentalize?

On Windows, one could try to move a single executable file (a simple application) and hope that any dependencies exist on the other machine, but anything more than that probably requires installation.
It's actually pretty trivial to move applications with arbitrarily complex linking procedures as long as you include all the non-system dependencies. For example, you can easily bundle a subset of the Qt runtime to make a portable Qt application that links dynamically, no installation required.
See for example also http://cmder.net/ . The portable version bundles 250 MiB in various binaries, including git and some other UNIX utilities, and you can move it at will between computers. Even the configuration is included in the subtree. For my own particular choice of configuration, I only need to install Unifont on my systems, but I could choose to use a default monospace font, such as Courier New.
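As a sketch of how such a bundle is produced on the Qt side (windeployqt is a real Qt tool; the paths here are hypothetical):

  REM from a Qt-enabled command prompt on Windows
  windeployqt C:\apps\myapp\myapp.exe
  REM copies the required Qt DLLs and plugins next to the exe;
  REM the resulting folder can then be moved between machines as-is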

With different OSes, let me re-put my argument about 10 different OSes: say there are not 10 different Windows OSes, but 10 different OSes in their own right. We already have 3 (discounting proprietary UNIX, which is just too expensive): Windows, Mac and various Linux. What if we had 7 other OSes that their owners had independently created, and they are all quite different from each other? Someone wants their software to run on all of them: should a standard for directory layout and other things be imposed on anyone creating their own OS, to make it easier for developers? Should DOS have been written off because it was pathetic and vastly different compared to Apple and UNIX at the time? Should developers complain about not having a single way to target all of them?
Yes, I think lack of standardization is a valid complaint. Fortunately, the Windows API at least accepts forward slashes. Christ, what a life saver.
That said, we aren't talking about ten Linux distributions, are we? And that's not even counting past versions. Both Windows and Mac provide backwards compatibility. Can you target Debian 3 (2002) and run on Debian 8 (2015)? You can target Windows XP and run on Windows 10, if you do things right.

That error happened because the user typed an extra space. Uninstallers can be tested.
That error happened because the developer mistyped a space in the uninstaller script (see diff for install.sh). That bug snuck right through testing.
To what advantage, regarding dependency satisfaction? You still have to get them manually, if only once.


Not at all. Once a pkg is made (it knows what its dependencies are), the PM will produce a list of library files that appear to be missing, ask for permission to download them, then install them. In terms of making the pkg, the developer provides the scripts and the source. I don't see where the "get them manually" comes from. As I understand it, the developer only has to list the library functions required, and put something in a script to see if they exist on the system. If they aren't there, the PM gets and installs them.
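For instance, on an rpm-based system (package file name hypothetical):

  # list the dependencies a package file declares
  rpm -qpR myprog-1.0-1.fc17.x86_64.rpm
  # install it and let the front end fetch anything missing
  sudo yum localinstall myprog-1.0-1.fc17.x86_64.rpm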

What? No. A library that contains common dialog functions has no business knowing about file formats. A client application should call FileSave() and FileSave() should give back a path. The application should then pass this path to whatever function handles whatever file format is needed.


But the whole point was about how these functions are bundled into a library. I am saying they are probably grouped into relatively large libraries (all the common dialog functions in one dll), as opposed to small reusable libraries. It's like C++ classes - we don't make huge classes, we make small reusable classes that can be combined in a large number of ways.

With WinSxS, if it is the solution, why does everyone seem to complain about it?

That error happened because the developer mistyped a space in the uninstaller script (see diff for install.sh). That bug snuck right through testing.


Oh, I see. I suspect it wasn't tested at all. A missing /usr would be massively and almost immediately obvious. And security privileges? Scripts are supposed to work with ordinary privileges; whoever is doing the install/uninstall usually owns the directory it is in. Anything needing to be done via su or sudo should be well documented. In other words, one should have to actually try quite hard to rm /usr or any other important directory.

This was a reply from that forum:

FedericoCeratto wrote:
That's what you get for writing your own installation/removal code instead of relying on the proper tools and processes provided by the distributions.


Uninstall script? Indeed not.

Can you target Debian 3 (2002) and run on Debian 8 (2015)?


As I understand it, the critical thing is the version of the kernel. I am not sure whether they are backwards compatible or not; I wouldn't be surprised if they were.

I feel like I could have said a lot more, but I have had enough for now.


But the whole point was about how these functions are bundled into a library. I am saying they are probably grouped into relatively large libraries (all the common dialog functions in one dll), as opposed to small reusable libraries.
Sort of. Actually, the distribution of functions across libraries is somewhat haphazard, for historical reasons.
I don't see what your point is, though.

With WinSxS, if it is the solution, why does everyone seem to complain about it?
I said that, already. It's complex. So is Windows Installer. They work, but they're difficult for developers to use right. A lot of people just don't bother.

Scripts are supposed to work with ordinary privileges; whoever is doing the install/uninstall usually owns the directory it is in.
If the script deletes or creates things in /usr it will have to be run by root, anyway.

As I understand it, the critical thing is the version of the kernel.
Linux would be really shitty if kernel changes forced user mode recompiles.
No, glibc is the problem. It consistently breaks the ABI across versions.
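You can see exactly which glibc symbol versions a binary demands - which is what pins it to a minimum glibc - with a check like this (binary path hypothetical):

  objdump -T /usr/bin/myprog | grep GLIBC_ | sort -u
  # a reference such as memcpy@GLIBC_2.14 will not resolve
  # on a system whose glibc is older than 2.14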

EDIT:
LB wrote:
PATH is another thing I really don't like about Windows and *nix. The idea of where to look for certain kinds of files, and in what order, should definitely NOT be in a string variable; it should be safely managed by the system and the user, and changing it should require the user to see what changes are being made and accept or deny them.

Environment variables in general are a pretty hacky thing. :(
I agree, PATH is a hackjob. My current PATH contains nearly 2000 characters. Some installers write to it at will and I end up with surprising DLL resolutions. It should be something more structured, or at the very least there should be separate lists for programs and DLLs.
LB wrote:
PATH is another thing I really don't like about Windows and *nix. The idea of where to look for certain kinds of files, and in what order, should definitely NOT be in a string variable; it should be safely managed by the system and the user, and changing it should require the user to see what changes are being made and accept or deny them.

Environment variables in general are a pretty hacky thing. :(


helios wrote:
I agree, PATH is a hackjob. My current PATH contains nearly 2000 characters. Some installers write to it at will and I end up with surprising DLL resolutions. It should be something more structured, or at the very least there should be separate lists for programs and DLLs.


A couple of things here:

Unix has a different philosophy to Windows in terms of configuration and environment variables.

Unix stores configuration in human-readable files (with security rights). Someone with sufficient privileges can alter the configuration easily. Sometimes there is a GUI program to help with that (it still just reads the ASCII file), but not often. A normal user doesn't need to worry about it much anyway. The same thing applies to environment variables. There are a lot of advantages to this philosophy - I read an article about it, but am having trouble finding it on the web.

"Safe" does not mean buried in a binary file.

Windows stores configuration in binary files or in the registry, and is then obliged to provide a (usually GUI) application to manage them. So now there are all these GUI apps for configuration, which makes the whole system bulkier. As mentioned already, the Windows PATH is ridiculous.

With the UNIX PATH, there are only a handful of paths in it, which is consistent with the standard location of files. Adding lots of paths to PATH is not encouraged, and unnecessary.

So to say that PATH and environment variables are a hack, is true for Windows and definitely false for UNIX.

helios wrote:
If the script deletes or creates things in /usr it will have to be run by root, anyway.


Yes, but the point was to use sudo, to provide some protection for one's self and from a bad script. sudo specifies what sudoers can and can't do. But never mind that: one should be using the uninstall option of the PM, not running scripts that blindly and recursively delete directories.
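For example, a sudoers rule of that kind (username hypothetical; always edit with visudo):

  # /etc/sudoers fragment: let alice run the package manager as root, nothing else
  alice ALL=(root) /usr/bin/pacman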

Sort of. Actually, the distribution of functions across libraries is somewhat haphazard, for historical reasons.
I don't see what your point is, though.


The point was in the next sentence:

TheIdeasMan wrote:
.... as opposed to small reusable libraries. It's like C++ classes - we don't make huge classes, we make small reusable classes that can be combined in a large number of ways.


In this link there is a section entitled: Eric Raymond’s 17 Unix Rules
https://en.wikipedia.org/wiki/Unix_philosophy


The point about having lots of small, highly reusable tools applies to library files as well.


I said that, already. It's complex. So is Windows Installer. They work, but they're difficult for developers to use right. A lot of people just don't bother.


Right, so it is an atrocity on Windows; UNIX has no need for this form of nightmare.

http://www.thewindowsclub.com/winsxs-folder-windows-7-8
thewindowsclub wrote:
Most of you may have noticed the WinSxS folder in Windows 7 / 8 / 10 and been surprised at its size. For those who have not, the folder is situated at C:\Windows\Winsxs and has a whopping size! Mine is almost 5 GB and has around 6000 folders & 25000 files, and occupies almost 40% of the Windows folder!


5 GB!? That's half the size of an entire Linux distro! 6,000 folders!? Now I put it to you: UNIX/Linux does this better. Agree?

So, back to the whole theme we have been debating.

I am saying that the Windows philosophy, and the idea of compartmentalising software all lead to disasters like WinSxS.

I also assert that you guys are looking at Linux with Windows lenses on and saying it is bad, but the fact is that it is well organised, works well, and is not anywhere near as bad as you are making it out to be.
So to say that PATH and environment variables are a hack, is true for Windows and definitely false for UNIX.
Nice double standard, there. A feature works exactly the same in two systems, but it's a hack in one but not in the other?
How PATH is stored is completely irrelevant. It's an implementation detail. Nobody is complaining about this. We're complaining about PATH itself and the way it's used by the system and abused by developers. The complaint would be equally valid if PATH was stored in a text file.

In this link there is a section entitled: Eric Raymond’s 17 Unix Rules
https://en.wikipedia.org/wiki/Unix_philosophy


The point about having lots of small, highly reusable tools applies to library files as well.
And?? We're talking about the system libraries, here. Of course they're reusable, they're the system libraries. Sorry, I don't understand where you're going. Please state your argument explicitly.

Right, so it is an atrocity on Windows; UNIX has no need for this form of nightmare.
"Atrocity" is rather a strong word, don't you think? It's generally reserved for things such as war crimes.
Linux doesn't need a complex linking procedure because, except maybe for RH, none of the distributions care at all about backwards compatibility. If you're constantly breaking binaries anyway, there's no point in providing a way to ensure that old binaries continue to link dynamically in the future.

5 GB!? That's half the size of an entire Linux distro! 6,000 folders!?
It's important to note that measuring the size of WinSxS is not trivial because many files are just hardlinks between each other. Many disk utilization tools are not hardlink-aware and will just over-report sizes.
Either way, though, yes. It's large.

Now I put it to you: UNIX/Linux does this better. Agree?
No. The day when a Linux distribution has twenty years of backwards compatibility we'll talk about which system handles SO/DLL Hell better.

I also assert that you guys are looking at Linux with Windows lenses on and saying it is bad
Yes, let's ignore the several times I've said other Unices such as FreeBSD and MacOS are better.
TheIdeasMan wrote:
"Safe" does not mean buried in a binary file.

Windows stores configuration in binary files or in the registry, and is then obliged to provide a (usually GUI) application to manage them. So now there are all these GUI apps for configuration, which makes the whole system bulkier.
There is neither a need for binary files nor a need for GUI applications. In my ideal system, managing these things would be done through system API calls. Then you can use whatever program or code you want. The underlying representation of them is an implementation detail that may change between versions of the system.
TheIdeasMan wrote:
With the UNIX PATH, there are only a handful of paths in it, which is consistent with the standard location of files. Adding lots of paths to PATH is not encouraged, and unnecessary.
These 'standard locations' you talk about are exactly what started this whole discussion, remember? I don't like mixing everything together like that. The ideal way to select the default version of Python is to make sure it comes first in PATH, but when you mix everything together you make it much more difficult.
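A sketch of that PATH-ordering trick (directory and version hypothetical):

  # put a private bin directory first and point "python" at the chosen version
  mkdir -p "$HOME/bin"
  ln -sf /usr/bin/python2.7 "$HOME/bin/python"
  export PATH="$HOME/bin:$PATH"
  python --version   # now resolves to 2.7 ahead of the system default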