A principal challenge of scientific computing is constructing an effective distributed system of hardware and software. Most software development is local, but most code is not executed on your local machine. There is no single solution to this problem. The goal of this page is to inform you and to give you the tools to find more information.
You will likely need to connect to an account on another machine, and this principally happens with SSH. You should make sure you are familiar with SSH keys, how to generate them, how they are stored on your local machine, and how they are used to make connections. I have a separate page for explaining how those work.
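A minimal sketch of generating a key pair from the terminal (the filename and comment below are just examples; omitting -N "" so that you are prompted for a passphrase is more secure):

```shell
# Generate a modern ed25519 key pair; -f sets the output path,
# -C adds a human-readable comment, and -N "" sets an empty passphrase.
mkdir -p "$HOME/.ssh"
ssh-keygen -q -t ed25519 -N "" -C "laptop key" \
    -f "$HOME/.ssh/id_ed25519_example"
ls "$HOME/.ssh/id_ed25519_example"*   # the private key and the .pub public key
# The private key stays on your machine; copy the public key to the
# remote account, e.g. with:
# ssh-copy-id -i "$HOME/.ssh/id_ed25519_example.pub" user@host.name.edu
```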
Once you are logged in, you will likely be using a UNIX-like system. For better or worse, Unix systems behave slightly differently depending on what "shell" you use. The "shell" is the program which acts as an interface between you and the remote system. Common choices are bash and csh. There are several tutorials around the web for learning how to use UNIX-like terminals. You should know, in particular, how to set and show environment variables.
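For example, setting and showing an environment variable in bash looks like this (the variable name is just an example):

```shell
# bash: "export" sets an environment variable for this shell and any
# programs it launches
export MY_DATA_DIR="$HOME/data"
echo "$MY_DATA_DIR"       # one way to show it
printenv MY_DATA_DIR      # another way to show it
# The csh/tcsh equivalent of the export line is:
# setenv MY_DATA_DIR ~/data
```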
Once you are logged in and can access files, you may also need to know how to edit files inside the terminal on the remote machine, and this often requires a text editor. Common choices are nano, emacs, vim, and others. I recommend finding a text editor which is widely available and which you also like.
It is often incredibly useful to be able to open a window from the remote machine directly on your local machine. In order to do this, you will often need to install extra software on your local machine which can render the windowing information (e.g. XQuartz on macOS). Once you are familiar with ssh, you can use the -Y flag to enable trusted X11 forwarding so that windows open on your local machine, e.g. ssh -Y -l remote_user_name host.name.edu.
Almost all work is collaborative, so you will inevitably want to share your code with your collaborators. The best way to do this is with a code repository. Subversion (svn) and git are common tools which create code repositories. These repositories may be hosted in the cloud, e.g. at github.com, gitlab.com, bitbucket.org, etc., or on a local server at your institution. I have a separate page for explaining how those work.
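A minimal sketch of creating a repository with git (the user name, email, and remote URL are placeholders):

```shell
# Create a new repository and make a first commit
cd "$(mktemp -d)"
git init -q myproject
cd myproject
echo "print('hello')" > hello.py
git add hello.py
# -c supplies an identity inline; normally you set this once with
# "git config --global user.name ..." etc.
git -c user.name="Your Name" -c user.email="you@example.edu" \
    commit -q -m "Initial commit"
# To share it, point at a hosted remote and push, e.g.:
# git remote add origin git@github.com:user/myproject.git
# git push -u origin main
```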
Even if you don't have a code repository, others with an account on the same machine can look at or edit code in your directory as long as you have given them permission to do so. The UNIX command chmod handles these permissions, and you should familiarize yourself with that command as well as chown, which allows you to change the ownership of a file.
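A sketch of viewing and changing permissions (the filename and the user/group names in the chown line are hypothetical):

```shell
# Create a file and adjust who can read and write it
cd "$(mktemp -d)"
touch results.dat
chmod 640 results.dat    # octal form: owner read/write, group read, others nothing
chmod g+w results.dat    # symbolic form: additionally grant group write
ls -l results.dat        # the permission string is now -rw-rw----
# chown is analogous for ownership, e.g. (usually requires root):
# chown colleague:research results.dat
```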
Technology for transferring data files depends strongly on the size of file and the kind of system you are accessing. Also, while small data files can be stored in repositories, large data files are better handled separately.
Transferring small files or directories, with no consideration for any sort of synchronization, is typically handled with scp. Note that scp, by default, will happily overwrite files on either the remote or the local machine, so choose your filenames carefully. The scp program operates similarly to ssh, so this page may help.
Another option for sharing files is to put them on a webserver. This can be particularly efficient, e.g., if you want to periodically update an image and then view it later in the browser.
If you want to synchronize several files and/or directories across two machines (and you don't want to store them in a repository, e.g. because they are too large), you can use rsync. The rsync command tries to be intelligent about only transferring files which have changed, so it can be a good solution for backups.
If you have large files you want to transfer to and from HPC systems, you should take advantage of the dedicated file transfer systems, e.g. Globus, which are available to you. Often scp and rsync also work on these systems (but are sometimes slower).
Another principal challenge is handling code dependencies and creating a manageable software stack. There is no particular method which always works for everyone, so I just warn you about some of the possible pitfalls below.
For C/C++ programs and libraries, installation instructions vary significantly, but the program or library documentation should help you with this. Pay particular attention to the final location of the files which are being installed. If you want to install different versions, then you may want to modify the installation procedure to ensure that the files are being installed in different locations. Some C++ libraries or programs require setting environment variables in your shell. A particular one to pay attention to is LD_LIBRARY_PATH, which is where the system looks for shared libraries at runtime.
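A hedged sketch of keeping two versions of a library in separate locations and selecting one at runtime (the package name and paths are hypothetical; the configure/make lines are shown as comments since they depend on the package):

```shell
# Install each version under its own prefix, e.g.:
# ./configure --prefix="$HOME/local/mylib-1.2" && make && make install
# ./configure --prefix="$HOME/local/mylib-2.0" && make && make install
# Then select a version at runtime by prepending its lib directory:
export LD_LIBRARY_PATH="$HOME/local/mylib-2.0/lib:$LD_LIBRARY_PATH"
echo "$LD_LIBRARY_PATH"
```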
If you use a package manager on MacOS, either homebrew, or macports, or fink, be careful about mixing them on the same system, as they sometimes are incompatible with each other. I have found homebrew to be the best package manager for MacOS.
There are also several ways of installing python packages and several different package managers. Conda and pip are relatively popular. (You can sometimes use both, but some care has to be taken in mixing them.) On Linux systems, there are often python packages installed by the Linux package manager which are altogether different. For example, the python3-h5py package on Ubuntu (installed with sudo apt install python3-h5py) behaves differently from the pip version (installed with pip install h5py).
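For example, the same package can arrive through three different routes (the installation lines are shown as comments since the right choice depends on your setup; h5py is the package from the text):

```shell
# pip install h5py               # the PyPI version
# conda install h5py             # the conda version
# sudo apt install python3-h5py  # the Ubuntu system version
# Running pip as a module makes it unambiguous which python
# interpreter it installs into:
python3 -m pip --version
```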
System software updates can sometimes break programs which depended on older packages, and this can cause difficulties. Nevertheless, for security reasons, it is important to keep your system software and OS updated whenever possible. One way to resolve some of these headaches is through virtualization, which I discuss briefly below.
For python, you can create virtual environments using venv (see this documentation). This works very well.
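A minimal sketch of creating, activating, and leaving a virtual environment (the directory name "myenv" is arbitrary):

```shell
# Create an isolated environment with its own python and pip
cd "$(mktemp -d)"
python3 -m venv myenv
. myenv/bin/activate      # "source myenv/bin/activate" in interactive use
python -c "import sys; print(sys.prefix)"   # now points inside myenv
# pip install ... here affects only this environment
deactivate
```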
A more drastic (but also potentially more powerful) solution is to create an entirely new virtual machine or container on your local (or remote) system which mimics some other machine or installation. This is typically done using docker. Docker requires administrative (sudo) privileges, so it is often better suited for creating a virtual environment inside your local machine rather than inside a remote system where you may not have those privileges. Docker installation on macOS is a bit complicated, but this link has an excellent explanation.
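A hedged sketch of the workflow (shown as comments since it requires a working docker install and sudo privileges; the image names and packages are just examples):

```shell
# Describe the environment in a Dockerfile, e.g.:
# cat > Dockerfile <<'EOF'
# FROM ubuntu:22.04
# RUN apt-get update && apt-get install -y python3 python3-venv
# EOF
# Build an image from it and open a shell inside a container:
# docker build -t mystack .
# docker run -it mystack bash
```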
Back to Andrew W. Steiner at the University of Tennessee.