Sorry, this isn't a complete how-to, just a thought. It occurred to me (some time ago) that a Python virtualenv is, essentially, a simplified version of a system "prefix": it has bin, lib, and include, and can grow more directories when needed. If you're willing to experiment (you'll probably have to set a few additional environment variables and/or build flags, but that's no big deal), you can install various other tools there, up to the point where you have a complete system with its own compiler and full set of libraries. It's much simpler, though, to keep using the system compiler and libraries and only complement them when needed.
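To make "a few additional environment variables and/or build flags" concrete, here's a minimal sketch (the paths are examples; any configure-style project would do):

    # Create and activate a venv; activate sets $VIRTUAL_ENV for us.
    python3 -m venv ~/.venvs/mytools
    . ~/.venvs/mytools/bin/activate

    # Point the usual build variables at the venv so compilers and
    # linkers look there first.
    export CPPFLAGS="-I$VIRTUAL_ENV/include $CPPFLAGS"
    export LDFLAGS="-L$VIRTUAL_ENV/lib $LDFLAGS"
    export PKG_CONFIG_PATH="$VIRTUAL_ENV/lib/pkgconfig:$PKG_CONFIG_PATH"

    # Now an autotools-style project installs straight into the venv:
    ./configure --prefix="$VIRTUAL_ENV" && make && make install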
Granted, prefixes are nothing new; people have been using /opt (and their home directories) this way since the beginning of time. But with a little help from virtualenvwrapper or pyenv you can easily switch between them and isolate environments better. Binaries and other files installed in a virtualenv override the system defaults, but only while the venv is activated. Sounds like a good thing to me.
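The override mechanism itself is nothing magical: activate just prepends the venv's bin to your PATH, and deactivate puts everything back. Roughly:

    $ python3 -m venv ~/.venvs/demo
    $ . ~/.venvs/demo/bin/activate
    (demo) $ echo "$PATH" | tr ':' '\n' | head -n 1
    /home/me/.venvs/demo/bin
    (demo) $ deactivate    # system defaults win again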
What use-cases are there? I'll start with the ones that originally occurred to me. First is installing libs and tools in your user directory in a controllable way. Sometimes you need to install stuff (especially pythonic stuff) for very basic things. Neovim, for one, needs several Python packages to function normally. The official recommendation is to install them with sudo pip install, but I can hardly recommend that: installing anything system-wide with a third-party package manager is generally a bad idea. Even though pip installs its packages into a separate site-packages directory, you'll still have a hard time maintaining it: it's hard to list all the packages installed that way, it's hard to upgrade everything (pip-tools can help, but still), and it's hard to uninstall stuff without leaving traces. So why not install it all into a neovim-specific virtualenv? Maybe even compile neovim itself there, so that you don't have to rely on other repositories (their official PPA is mostly OK, but it's harder in other distributions). Same with any other tools, especially Python-related ones. You can keep requirements files in your dotfiles and put some shims into your ~/bin so you don't have to activate/deactivate all the time, something like the sketch below (rcm hooks can be used to run the actual install commands; I love rcm, by the way).
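A sketch of the neovim case (pynvim is the real dependency; the paths are just examples):

    # One venv per tool, managed like any other dotfile-driven setup.
    python3 -m venv ~/.venvs/neovim
    ~/.venvs/neovim/bin/pip install pynvim

    # No activation needed: point neovim at the interpreter directly,
    # in init.vim:
    #   let g:python3_host_prog = expand('~/.venvs/neovim/bin/python')

And a generic ~/bin shim for command-line tools is a one-liner ("sometool" is a placeholder):

    #!/bin/sh
    # ~/bin/sometool -- forwards to the tool inside its own venv.
    exec "$HOME/.venvs/sometool/bin/sometool" "$@"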
Second possible use-case is system libraries. Many Python packages require system libraries (and headers!) to build: lxml, psycopg2, pillow, the list goes on. If you don't have the required headers yet, hopefully you'll get an informative error message on install, or you'll find the information in the project's README. After a while, you just remember to always install libxml2-dev, libjpeg-dev, python-all-dev, etc. on any new server. But why not install them into the virtualenv instead? Admittedly, compilation takes a bit longer, but at least you can be sure you're using the same library versions across all your systems. (OK, this one is less important now that everyone and their father uses docker or similar tools, and even the use of virtualenv has possibly diminished with the rise of containers, although there's no particular reason why you can't containerize virtualenvs... but I digress.)
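Continuing the sketch from above (the CPPFLAGS/LDFLAGS setup), and assuming you have a source checkout of the library you need:

    # Build the C library into the venv prefix...
    cd libxml2-src/     # hypothetical source directory
    ./configure --prefix="$VIRTUAL_ENV" && make && make install

    # ...make sure the runtime linker can find it...
    export LD_LIBRARY_PATH="$VIRTUAL_ENV/lib:$LD_LIBRARY_PATH"

    # ...and force pip to compile the binding against it instead of
    # downloading a prebuilt wheel:
    pip install --no-binary lxml lxml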
Somewhat related to both, I had an idea to experiment with implementing a window manager and other GUI tools using pygtk (or is it pygobject now? I haven't been following), but installing it into a virtualenv is a serious problem. Even if you can do most of the stuff using the system's site-packages, things like testing against multiple versions of libraries and of Python itself would be a complete mess. Containers would probably help if I got serious about it, but some lighter isolation sounds much better.
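For the "use the system's site-packages" part, at least, venv has a built-in middle ground (assuming pygobject is installed system-wide):

    # The venv sees system packages like gi, but anything you pip
    # install stays isolated inside it.
    python3 -m venv --system-site-packages ~/.venvs/wm-experiment
    ~/.venvs/wm-experiment/bin/python -c 'import gi; print(gi.__file__)'

It doesn't solve the multiple-library-versions problem, but it covers the everyday case.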
All of this might not be perfect for distributing stuff to end users, of course, but when you're developing something, or when you can't rely on your distribution's repositories for whatever reason, it sounds like a better idea than most. In particular, I hate it when people I don't know and have no reason to trust tell me "run this long and obscure script as root so it installs our stuff deep in your system in a way that makes it hard to remove"; user-directory installs which you can easily list/review/update/purge are way safer. So, something to replace those notorious curl -s example.com/install.sh | sudo bash one-liners is yet another use-case. Most new systems come with python3 and virtualenv anyway. It might even grow up to become another linuxbrew some day (the great part is that it could deliver binary packages as well, so it won't necessarily require much compilation; and we'd probably want a separate package index for non-python-related ports), or even replace the system package manager, at least for some stuff (which would probably require environment multiplexing, meta-packages, and better dependency control), but that's getting ahead of things.
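The list/review/update/purge part is exactly what pip already gives you inside a venv; a quick sketch:

    pip list                              # review what's installed
    pip freeze > requirements.txt         # snapshot it
    pip install -U -r requirements.txt    # upgrade it
    rm -rf ~/.venvs/sometool              # purge it, no traces left behind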
(On a separate note, it would be great to use GitHub as a replacement for PyPI. Some trusted build service could build binary packages (and generate a signature covering both the git commit hash and the binary tarball hash) so that pip could verify that a package wasn't tampered with and that a particular revision of the source code was used. Trust doesn't necessarily require hosting the actual files yourself. I've always thought that hosting a whole Linux distribution entirely on GitHub is an interesting idea, although I'll probably never have the time or the community. But that's largely unrelated.)
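(The signature service above is hypothetical, but pieces of it already exist in pip: it can install straight from a git commit, and it can refuse anything whose hash doesn't match a pinned value. For example, with a made-up repo and hash:

    pip install 'git+https://github.com/example/pkg@1a2b3c4'

    # requirements.txt:
    #   pkg==1.0 --hash=sha256:<expected hash>
    pip install --require-hashes -r requirements.txt)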
In any case, virtualenv is great and we should do more of it. The critics who call it too hacky, complicated, and unnecessary just don't understand it.