Confused by jargon, or looking for more detailed descriptions of technical terms? Our backup glossary can help. Read on for a plain English overview of some essential backup and networking vocab, or browse through our drop-down list of terms (below) if you want to jump straight to something specific. 


3-2-1 backups

The 3-2-1 backup method is an approach to data storage that keeps three copies of your data on at least two different types of storage media, with one copy held offsite - for example, the original files on your PC, a second copy on an external hard drive in your immediate location, and a third stored online or on a NAS device in a different geographical area.

Check out our Ultimate Online Backup Guide for more info on the 3-2-1 approach.


Archiving

Archiving is a long-term Cloud storage technique that lets you save space on your hard drive by moving files that you want to keep, but don’t need ready access to, off your PC.

For more information on different archiving methods and their costs, take a look at our guide to Everything You Need to Know About Personal Data Archiving.


Asymmetric-key (public-key) encryption

In asymmetric-key encryption, the decryption key is entirely separate from the encryption key, although the two are mathematically linked. This means that examining the encryption key won’t reveal the information needed to decrypt your data, so it can be transferred publicly or via channels that you may not be certain are secure. It is the opposite of symmetric (private-key) encryption and is more secure, although slower, than the alternative.

RSA is the most widely used asymmetric algorithm.
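To see how the two keys relate, here's a toy RSA example in Python using tiny textbook primes. This is purely an illustration - real RSA keys are 2048 bits or longer and use padding schemes such as OAEP:

```python
# Toy RSA with tiny textbook primes -- for illustration only.
p, q = 61, 53
n = p * q                 # 3233, part of both keys
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public (encryption) exponent
d = pow(e, -1, phi)       # private (decryption) exponent (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # anyone can encrypt with (e, n)
decrypted = pow(ciphertext, d, n)  # only the holder of d can decrypt

print(ciphertext, decrypted)  # 2790 65
```

The public pair (e, n) can be shared openly because recovering d from it requires factoring n - trivial here, but infeasible at real key sizes.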


Authentication

Authentication is the method used to prove that a user is authorized to access data - usually by manually entering a username and password or similar info. This process is called user authentication.

There is also machine authentication, which checks the details of your computer before granting access to content - a popular solution for services that, for example, only allow you to register one device.



Bandwidth

Bandwidth is closely related to speed, although it measures something slightly different - namely, how much traffic can pass through an internet connection at any one time, rather than the maximum speed at which that traffic can move. So if you want to transfer 100MB of files, sending them across a connection with a 30Mbps bandwidth cap will get them to their destination quicker than sending them with an 8Mbps limit, as there will be much less queuing time.

If you’d like to know more, check out our introduction to understanding transfer speeds in the BestBackups Ultimate Online Backup Guide.
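To put rough numbers on the example above (remembering that file sizes are quoted in megabytes, bandwidth in megabits per second, and 1MB = 8Mb), here's a quick Python sketch:

```python
# Rough transfer-time estimate. File sizes are measured in megabytes
# (MB) but bandwidth in megabits per second (Mbps), so convert first.
def transfer_seconds(size_mb, bandwidth_mbps):
    return size_mb * 8 / bandwidth_mbps

print(transfer_seconds(100, 30))  # ~26.7 seconds on a 30Mbps link
print(transfer_seconds(100, 8))   # 100.0 seconds on an 8Mbps link
```

Real transfers also involve protocol overhead and congestion, so treat these as best-case figures.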


Cipher

A cipher is the encryption algorithm used to turn your plaintext data into a random set of characters, unreadable without a decryption key. Common ciphers used by backup providers include Blowfish, RSA, and the current industry standard - AES.

For more details, check out our Ultimate Online Backup Guide’s Security section.



Ciphertext

Encrypted data.


Compression

Compression is a process used to maximize storage space, speed up file transfers and limit bandwidth use by reducing the number of bits that are used to represent data. This can be done in two ways - the first being lossy compression, which discards unnecessary information and creates new representations of files based on approximation, rather than by perfectly recreating the original. This approach is faster and can result in smaller files, but often comes at the cost of quality - particularly when used to compress video or image files.

The second approach is lossless compression, which is slower, but produces higher quality results by converting files into formats such as .ZIP that reduce the amount of space being used, while still allowing data to be fully reconstructed in its original form.
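Here's a quick demonstration of lossless compression using Python's built-in zlib library (the same DEFLATE algorithm that underpins the .ZIP format) - note that the repetitive data shrinks dramatically, yet decompresses back to an exact copy:

```python
import zlib

original = b"backup " * 1000          # 7000 bytes of repetitive data
compressed = zlib.compress(original)  # lossless compression

print(len(original), len(compressed))            # 7000 vs a few dozen bytes
print(zlib.decompress(compressed) == original)   # True -- fully reconstructed
```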


Data center

A large collection of computer servers used to store significant quantities of data, usually owned by one company.


Data redundancy and RAID

Data redundancy is the practice of keeping your backed up files safe - effectively making backups of your backups. This can be achieved by using physical security measures such as fire and flood protection or a secondary power source, or utilizing techniques such as RAID to make mirrored copies of the contents of storage disks.

Used by almost every reputable online backup service, data redundancy measures ensure that, if one hard drive containing your data fails or is damaged, there’s at least one more identical copy waiting in the wings.

For more information on common redundancy measures and RAID, head over to the Security section of our Ultimate Online Backup Guide.



DD-WRT

DD-WRT is an open-source firmware designed to maximise the functionality of a standard WiFi router. With DD-WRT installed, your router can be used to remotely access devices in your network, set up a VPN server, manage your network traffic, and more. It can also turn your router into a NAS device, so long as it has a USB port and you've got a removable storage device to hand.

While DD-WRT is not compatible with all routers, there’s a handy list available on their website that details all of their supported devices.

For more information on DD-WRT and how to use it to set up your router as a NAS, take a look at the excellent DD-WRT Guide created by our sister site.


De-duplication

Data de-duplication is a method of compression that works by getting rid of duplicate copies of your data. It allows for more effective use of storage space and quicker file transfers, although backup providers offering this service must have knowledge of the contents of your files in order to complete the process.
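A simplified sketch of the idea in Python - chunks of data are stored under their hash, so identical chunks are only ever saved once, however many times they appear:

```python
import hashlib

# Content-addressed store: each chunk is keyed by its SHA-256 hash,
# so duplicate chunks take up no extra space.
store = {}

def save_chunk(chunk: bytes) -> str:
    key = hashlib.sha256(chunk).hexdigest()
    store.setdefault(key, chunk)  # a repeat chunk adds nothing new
    return key

chunks = [b"report.doc", b"photo.jpg", b"report.doc"]  # one duplicate
keys = [save_chunk(c) for c in chunks]

print(len(keys), len(store))  # 3 references, but only 2 stored chunks
```

Real de-duplication works on fixed or variable-size blocks rather than whole files, but the principle is the same.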

Digital certificate

Digital certificates are official ‘documents’ for your computer, which confirm its identity. They contain the device name, a serial number, expiration dates, a copy of your public key (used for encrypting messages and digital signatures) and the digital signature of the certificate-issuing authority (CA) so that a recipient can verify that the certificate is real.


Disaster recovery

Often mentioned in the same breath as online backups, disaster recovery is primarily concerned with protecting and retrieving resources in the event of a major, destructive event such as fire, large-scale data loss, or hacking.

Disaster recovery plans (DRPs) are designed to enable the quickest possible recovery after an event, so are concerned as much with policies, procedures, and hardware as they are with data itself. Common measures include the use of offsite backups and alternative power sources, as well as a means of gaining immediate access to a secondary network connection, and more.


DNS

A DNS, or domain name server, translates domain names into IP addresses. This is important because, although users rely on domain names to identify the websites they're trying to reach, the internet has to be given an IP address in order to send data or data requests to the right place.
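You can ask your system's resolver to perform this translation yourself - here's a one-liner in Python ('localhost' is used so the example doesn't depend on a live network connection):

```python
import socket

# Resolve a hostname to an IPv4 address, just as your browser does
# before it can contact a website.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1 on most systems
```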


Drive mapping

Drive mapping is the method of taking a local drive from elsewhere in your network and associating it with your own computer, so that you can access it as quickly and easily as if it was a hard drive directly attached to your PC. Mapped drives are assigned letters from A - Z in the same manner as your computer’s native hard drive, which is usually identified as the C: and/or D: drive.

The process is also often referred to as 'drive mounting'.


The process by which information is encoded, so that it can’t be understood by anyone who doesn’t possess the key. Achieved by using an encryption algorithm, or cipher, and an encryption key.

For information on more advanced security terms, and how to know which backup providers will keep your data safe, check out the Security section of our Ultimate Online Backup Guide.

End-to-end encryption

If a backup service protects your data with end-to-end encryption, it means that your files are always encrypted when they’re in the company’s hands. This is achieved by encrypting them before they ever leave your computer, and only decrypting them when they’re back on your PC. For an extra layer of security, the encryption key used can be generated based on a password that only you know - meaning that your provider couldn’t return them to plaintext form, even if they wanted to.


Fault tolerance

Literally, the ability to tolerate faults. If there is a problem or failure within a system, fault tolerance is the term given to its ability to continue to function correctly.


File server

Within a client-server network structure, a file server is a computer that provides centralized storage for a network’s files, as well as being the location from which that shared data can be managed and retrieved.

For example - if there are six devices within an office network, all of which are permitted to access the same set of resources, these resources will be stored on the network’s file server - meaning that all six employees can access them at any time without having to move them physically using a USB drive, or spend time transferring them by email.

File servers can also be adapted to meet specific requirements - for example, determining exactly which computers can access which information, requesting login details before granting access, and withdrawing permissions from users that no longer need to see certain data.

 File servers are mostly used by companies and organisations such as schools and universities, although they can also be part of a home setup if there is a need to share files from a central source. Standard computers can be made into file servers, or a dedicated device (such as Network Attached Storage) can also be used.

File versioning

File versioning is a feature offered by many backup providers that allows you to retrieve previous drafts of your work, even after they have been saved over.

For more information on versioning and exactly how it works, head over to Part 8 of our Ultimate Online Backup Guide - File versioning, deleted file recovery and archiving.



Firewall

A firewall is a network security feature that controls what traffic passes in and out of your network. Working from a predetermined set of rules, firewalls create a barrier between your internal network and insecure external networks, such as the internet, by blocking ports. Using a firewall ensures that, if an unauthorized source is trying to access your computer or local network, they will be automatically stopped.
Firewalls can be implemented at hardware level (usually integrated into your router) or software level (installed on individual devices), and their criteria for accepting or rejecting traffic can usually be customized to meet individual needs. They’re commonly used to prevent viruses, hackers and worms from reaching your PC.


Firmware

Firmware is a software program integrated into a hardware device, which determines how it performs its functions and communicates with other devices. Firmware usually comes pre-installed on devices such as video cards, keyboards and hard drives, although it can be manually removed or rewritten. The instructions given by firmware are designed to be permanent, and are stored in read-only memory (ROM).



FTP

File Transfer Protocol (FTP) is a protocol commonly used to transfer files over the internet. It works with TCP/IP, and runs on client-server architecture. FTP operates in the same way as HTTP does while transferring web pages, and as SMTP does when sending emails - the client communicates a request to the server, to which the server responds by performing the requested task. FTP specifically is used to upload, download, copy, move, rename and delete files. It is also useful for searching through large databases or directories.

FTP can be directed via a command line interface, GUI, or web browser.
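As a rough illustration of the client-server exchange, here's a hypothetical upload helper written with Python's built-in ftplib - the host, credentials and file names are placeholders rather than a real server:

```python
from ftplib import FTP

# Hypothetical FTP upload helper -- all arguments are placeholders.
def upload(host, user, password, local_path, remote_name):
    with FTP(host) as ftp:          # open the control connection
        ftp.login(user, password)   # authenticate with the server
        with open(local_path, "rb") as f:
            # STOR is the FTP command that asks the server to
            # accept and store the uploaded file.
            ftp.storbinary(f"STOR {remote_name}", f)
```

Calling it would look like `upload("ftp.example.com", "me", "secret", "report.doc", "report.doc")` - again, a made-up host for illustration.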


Gateway

A gateway is the point at which data packets move from one network (such as a LAN) to another (such as the wider internet). A gateway can be an ISP, as is typical in home setups, or a dedicated computer (more common in larger organisations), and is used to implement protocol changes, as well as directing packets towards their specified location.


Handshake encryption

All SSL sessions start with an SSL handshake – a kind of exchange of messages which proves the server’s authenticity and then allows client and server to create symmetric keys to encrypt & decrypt data during the following session. If needed, it can also let the client identify themselves to the server.


Hash functions

Hashing is the process by which a string of characters (such as a password, message or passage of text) is transformed into a unique, fixed-length series of numbers. For example, a simple hash function might turn the word ‘tiger’ into '7428', while ‘lion’ becomes '7429'.

Hashing is a useful indexing tool that makes it quicker to find specific information in a database (as hashed codes are unique, while words may occur multiple times in one text), and is also used in encryption to obscure the original content of information being transferred online.

The algorithm used to generate the hashed text is called the hash function.
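In practice, real hash functions produce much longer, fixed-length output than the toy numbers above. A quick demonstration with SHA-256 from Python's standard library:

```python
import hashlib

# SHA-256 always produces a 256-bit digest (64 hex characters),
# no matter how long the input is.
for word in ("tiger", "lion"):
    digest = hashlib.sha256(word.encode()).hexdigest()
    print(word, digest[:16] + "...", len(digest))  # always 64 characters
```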


Homegroup

A homegroup is a group of computers within a local network that share files and devices such as printers and scanners. Homegroups can be password protected, and don’t have to include all members of the network.


Host server

The server that hosts a specific service – most commonly a website.


HTTP and HTTPS

HTTP is the protocol by which data is sent between your browser and the website that you’re connected to. Communication sent over a normal HTTP connection travels in plaintext form, and can be read immediately by anyone who intercepts it. The extra ‘S' stands for Secure, and means that all communication between your browser and the website is encrypted.

HTTPS is most commonly used in confidential online transactions, such as shopping and online banking. HTTPS connections are indicated in many browsers by a padlock icon that appears in the address bar, as well as the 'https://' at the beginning of the site’s address.

HTTPS pages typically use asymmetric (public key) encryption, via SSL or TLS protocols.

IP address

An IP address is the unique series of numbers assigned to every device connected directly to the internet. IP addresses are usually generated by your ISP, and are used to identify the location that data is being sent to or from over the internet.

Public IP addresses are only assigned to devices with a direct connection to the internet - so while multiple computers all using the same WiFi network will appear to share a single address, that address actually belongs to the router providing the connection. The individual devices that connect to it are given private, local addresses instead.

 There are two types of IP address - static and dynamic.

Static IP addresses

Static IP addresses are assigned the first time a device connects directly to the internet, and do not automatically change. Today, static IPs are rarely assigned to home users, although some ISPs may offer them as part of more expensive plans.


Dynamic IP addresses

The majority of IP addresses assigned to personal users today are dynamic, which means that when your connection goes offline, your IP address can be reassigned to someone else - a measure mainly designed to conserve the dwindling pool of available addresses.

Although exactly when and how a dynamic IP address changes may vary between ISPs, it typically happens every time a direct connection with the internet is broken. So when you disconnect your laptop from a WiFi connection, your IP address won’t change, but when you turn off or reset your router, it will.


ISP

Internet Service Provider - the company that supplies your internet connection, such as Verizon or Comcast in the US, BT or Virgin in the UK.


Key management

How encryption keys are generated, distributed, stored, changed and destroyed. One of the most challenging aspects of encryption.



LAN

A LAN, or local area network, is the name given to any group of connected devices that reside within a small physical area - typically within the same building, floor or room.


Local storage

Local storage encompasses the programs and data that are kept on your computer’s hard drive, or on a removable drive that’s stored within your geographical location. Local storage devices are accessible without having to connect to the internet.



Mbps

Mbps, or Megabits per second, is the unit most commonly used to measure data transfer speeds. One MB (megabyte) is equal to 8Mb (megabits).



Metadata

Metadata is information that describes a given piece of data. For example, the metadata attached to a Word document might tell you who created it, when it was created, the size of the file and, in some cases, a summary of its contents.


MITM (Man-in-the-Middle) attacks

An attack that can break into data sent using end-to-end encryption. If an interceptor is able to impersonate the intended recipient (so that the data is encrypted using the interceptor’s own public key), they can decrypt and read the sent data.

Having done so, they can re-encrypt the data again using the correct public key and send it on to the intended recipient, so they are not alerted to the interception. This can be avoided by both parties repeating a verification code to each other before the data is sent, to ensure that neither is a man in the middle.
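That verification step can be sketched in Python - both parties compute a short fingerprint of the public key they've received and compare the results out-of-band (the key bytes below are placeholders, not a real key):

```python
import hashlib

# Out-of-band verification sketch: a short fingerprint of the public
# key. If an interceptor swapped in their own key, the fingerprints
# the two parties read to each other would no longer match.
def fingerprint(public_key: bytes) -> str:
    return hashlib.sha256(public_key).hexdigest()[:12]

alice_sees = fingerprint(b"--- recipient public key ---")
bob_reads = fingerprint(b"--- recipient public key ---")
print(alice_sees == bob_reads)  # True -- no man in the middle
```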


Network attached storage (NAS)

Network attached storage (NAS) is a way of creating shared storage space within a network by creating a space where all devices on the network can store and retrieve files. Companies such as QNAP and Synology specialise in creating pre-made NAS devices, or you can set up your own by attaching a removable hard drive to your WiFi router, or using reclaimed hardware or a Raspberry Pi if you're more technologically inclined.

As well as allowing you to back up and restore files from all devices in the network, advanced NAS devices also offer data redundancy settings and additional features including remote file access and the ability to stream media files. For more information, head over to our in-depth NAS Guide.



Network

A network is any group of computers or other electronic devices (phones, tablets, printers, scanners, servers etc) that are connected - either physically or via the internet.


Network architecture

Network architecture is the term used to describe how a network is structured - in particular, its physical components and how they communicate with one another. It also determines the roles of these components, as well as identifying features such as wireless access points, the protocols used for formatting and transferring data, and different access methods, among others.

There are two commonly used forms of network architecture that it’s particularly useful to know:

Client-server architecture

Networks that use client-server architecture assign each of their attached devices and processes the role of either client or server.

Clients are the computers and workstations that belong to a network, and which are used to run software applications.
Servers, on the other hand, are dedicated to managing processes on behalf of all network users - such as storing and managing files (file servers), or processing network traffic (network servers). They form the basis of many of the actions performed by clients - holding files for users to access, processing actions, and granting access to devices such as printers and modems.
Although large organisations typically use dedicated computers to act as servers, smaller operations can configure personal computers to act as, for example, file servers that store data for access by all members of a homegroup.

Peer-to-peer architecture

Peer-to-peer (or P2P) architecture is an alternative to the client-server model, in which every computer performs both client and server roles, and all nodes are given an equal share of responsibility for the processes taking place. So rather than having a central, powerful server that facilitates clients, all devices have equal power, and do equal amounts of the work.



Node

A node is simply the tech term for any device or system that’s connected to a network. Common examples include computers, mobile devices, file servers and printers.


Online Backup / Cloud Storage

Although the terms 'online backup' and 'Cloud storage' might seem at first glance to be interchangeable, when it comes to backups, they mean distinctly different things.

Online backups take the more traditional approach of the two - typically offering automatic uploads, unlimited storage space and strong security features. Cloud storage, on the other hand, focuses much more on quick and easy processes, file sharing, and remote access.
To find out which service would suit you best, and for some recommended backup providers, visit our Ultimate Online Backup Guide and check out Part 3 - What Type Of Backup Service Do You Need?

Open-source software

Open-source software has source code that can be freely accessed, modified, and used by anyone inside or outside of the company that made it. A favorite of security experts, open-source software allows external users to poke around and find out exactly how programs have been built, giving them a fighting chance of picking out anything dodgy or broken.

While open-source is an undeniably good idea with plenty of benefits, it does run into a few problems - primarily due to the fact that examining source code is extremely time-consuming and requires considerable expertise to do thoroughly. Despite this, however, there is a general view that a company’s willingness to make its software available to outside users is an indication of transparency and honesty about their product.

Open-source is the opposite of proprietary, or closed-source, software, which no-one else can modify or copy, and which is the standard method used by major programs such as MS Word and Photoshop.



Packet

A packet, also known as a data packet or network packet, is a formatted unit in which data is transferred over the internet. Whenever information is sent from one computer to another online, it’s broken down into smaller chunks to increase the speed and efficiency of the transfer. Each of these chunks is called a packet and features both a data area, containing the information that you’re trying to send, and a header, which holds information on where the data’s coming from and where it’s going, as well as a unique identifying number.



Parity

In general usage, parity describes two things that are equal. When it comes to backups, it means much the same thing - but here, it's used specifically to determine whether data has been transferred with any errors. Effectively, assessing the parity of uploaded or downloaded data means finding out if it is the same after it's been transferred as it was before.
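The simplest version of this idea is a parity bit - count the 1-bits in the data and record whether the total is even or odd. A minimal Python sketch:

```python
# A parity check records whether the number of 1-bits in the data is
# even (0) or odd (1); a single flipped bit in transit changes the answer.
def parity(data: bytes) -> int:
    return sum(bin(byte).count("1") for byte in data) % 2

sent = b"backup"
received = b"backuq"  # one bit flipped in transit

print(parity(sent), parity(received))  # the values differ, flagging the error
```

A single parity bit can detect an odd number of flipped bits but can't locate or repair them - real transfer checks use stronger checksums such as CRCs.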



Plaintext

Unencrypted data.


Ports

Ports are the places where a device or program connects to its desired location on the internet.

When you send a data packet over the internet, it is sent to the IP address of the computer you’re trying to reach. Once there, it will connect to the device via a specific port designed to handle the type of connection you’re trying to make (whether accessing emails, viewing the comments page of a website, moving files using FTP etc).

There are 65,535 potential TCP ports, and the same again for UDP, with the port type used in any given transfer matching the protocol by which the packet was sent. Despite this enormous number of potential ports, however, relatively few are regularly used in day-to-day processes - most of them numbered from 0 - 1023, the 'well-known' ports reserved for standard internet services.
When receiving information, ports are configured to 'listen' for particular kinds of connection (for example, identifying and then accepting communication from a particular web server application). Once this contact has been made, the port sends the information that has been requested back to the device that initiated the communication, and then terminates the connection between them.
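Here's a minimal Python sketch of a program 'listening' on a port - passing 0 asks the operating system to pick a free one, in the same way that real server applications bind to well-known ports such as 80 (HTTP) or 443 (HTTPS):

```python
import socket

# Open a TCP socket that listens for incoming connections, as a
# server application would. Port 0 means "any free port".
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()

port = server.getsockname()[1]  # the port the OS actually assigned
print(f"listening on port {port}")
server.close()
```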

Port forwarding

Firewalls block ports to ensure that only safe traffic makes its way to your computer, but sometimes blocked ports can restrict your ability to use certain features. Port forwarding is the process of unblocking ports that you want to have open access to, thereby allowing you to connect to sources that were previously denied.



Protocols

A protocol is a method of transferring network packets between devices, based on a pre-agreed set of rules. These rules determine what information goes into a packet’s header, and therefore how the packets themselves are formatted. Different protocols are used for different purposes, but all determine how a device tells when data has been sent or received, amongst other functions. When it comes to sending information over the internet or other computer networks, the most widely used protocols are IP, TCP and UDP.



IP

IP, or Internet Protocol, is a kind of base-level connection protocol that underlies all internet data transfers, and which TCP and UDP are built upon. IP is an essential foundation for sending information online as all devices continue to be identified across both the web and local networks by their IP address.


IPv4 and IPv6

If you’ve seen the terms IPv4 and IPv6 used around the internet, you might be wondering how, if at all, they relate to the standard Internet Protocol. Luckily, it’s pretty simple.

IPv4 - which stands, unsurprisingly, for Internet Protocol version 4 - is the standard that’s been used to generate IP addresses pretty much since the internet began (versions 0 to 3 were experimental only). It uses 32-bit addresses, which allows for approximately 4.29 billion possible combinations - making it pretty impressive that, in recent years, the supply of IPv4-generated addresses has run dry.
While some measures have been taken to extend IPv4’s usability, the best long-term solution appears to be the implementation of IPv6 - a new standard that allows the creation of 128-bit addresses - giving us a staggering 2^128 addresses to assign. It’s hard to put into words exactly how many this is, so here’s a nice explanation from computer scientist and former EDN Network columnist Steve Liebson:
"So we could assign an IPV6 address to EVERY ATOM ON THE SURFACE OF THE EARTH, and still have enough addresses left to do another 100+ earths. It isn’t remotely likely that we’ll run out of IPV6 addresses at any time in the future.” Steve Liebson
We’re inclined to agree. However, IPv6 can’t be implemented overnight, so while we wait for it to roll out across the entirety of the net, some temporary fixes, such as using dynamic IP addresses, have been put in place to extend the use of IPv4 addresses.
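The arithmetic behind those figures is easy to check for yourself:

```python
# The address-space maths behind the IPv4 shortage and the IPv6 fix:
# 32-bit vs 128-bit addresses.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"{ipv4_addresses:,}")    # 4,294,967,296 (~4.29 billion)
print(f"{ipv6_addresses:.3e}")  # roughly 3.4e+38
```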


TCP

TCP (Transmission Control Protocol) is the most reliable extension of the IP protocol and, as a result, is also the most widely used. This reliability is due to the fact that TCP connects directly to the computer that your network packets are being sent to, only terminating the connection when it’s certain that all of the information has arrived safely. If it doesn’t receive confirmation from the other device, it will attempt to resend the information. This process is often referred to as error correction.

Using TCP means that you’ve got a very good chance that all of your packets will arrive intact, although it’s a fairly process-intensive way of doing things, and can be a little slow. You may also see it referred to as TCP/IP - a name typically used when referring to it as the standard protocol for internet-based communications.


UDP

UDP (User Datagram Protocol) is a considerably faster process than TCP, although it’s less reliable as packets being transferred aren’t subject to any error correction at all. Rather than connecting to the computer you’re trying to reach, UDP simply bundles your information into packets and sends them out into the internet, relying on the computer at the receiving end and the devices between you to deliver them to where they need to be. Because of this, UDP doesn’t involve any acknowledgement of whether your information has arrived or not, and won’t attempt to resend anything.

As a result, UDP is a slightly more hit-and-miss approach, although it does have the advantage of being much less process-heavy than TCP. It is a stateless protocol, while TCP is stateful.

Proxy servers

Often simply referred to as proxies, proxy servers are computers that act as intermediaries between your computer and the internet. All data and requests for information that come from a network have to pass through a proxy before they reach the internet - with proxies contributing to security (as part of a firewall setup), as well as performance improvement and filtering requests.

 Performance improvement mainly comes in the form of saving information about regularly used sites to the proxy's cache, as well as logging user interactions with the proxy for later use in troubleshooting.

Different proxies can be used to achieve specific goals - one of the most commonly used being an anonymous proxy, which allows users to use the internet without revealing their IP address.

Random salt

Random salt is the name given to extra characters that are added to a password before it is hashed, in order to make the result more difficult to crack - for example, by using precomputed tables of common password hashes. Salting is the process by which they are added.
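A simplified Python sketch of the idea - note that real systems use deliberately slow functions such as bcrypt or PBKDF2 rather than plain SHA-256:

```python
import hashlib
import os

# Salted hashing sketch: the random salt means two users with the
# same password still end up with different stored hashes.
def hash_password(password: str, salt: bytes = None):
    salt = salt or os.urandom(16)  # 16 random bytes per password
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

salt1, hash1 = hash_password("hunter2")
salt2, hash2 = hash_password("hunter2")
print(hash1 == hash2)  # False -- different salts, different hashes
```

To check a login attempt, the server re-hashes the submitted password with the stored salt and compares the result.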


Remote file server

A remote file server (sometimes also referred to as a remote access server) is a server that allows people to access files on a network that they are not a part of - for example, accessing files on an office LAN from home. Users connecting to a remote file server are typically subject to a process of authentication before being granted access to the network files.



Router

A router is a device that forwards data packets along to other networks. Located at gateways, routers are connected to multiple networks, and are responsible for moving packets from one (such as a LAN) to the next (eg. the internet). Routers also decide which path packets should take to their destination, and determine their formatting based on the information in their headers.

While the term ‘router’ is commonly used to refer to devices that provide a wireless internet connection, routers can also move packets between multiple LANs, or between a LAN and a company WAN.

Session Key

Session keys are short-term cryptographic keys, typically based on a shared secret between client and server. A session key's lifespan is determined by the duration of the session it’s used for, though it should always be strong enough to withstand cryptanalysis for the entire length of the session.



SSL and TLS

SSL and TLS are cryptographic protocols used for sending secure communications across a network. SSL, which stands for Secure Sockets Layer, encrypts the data that passes between your browser and the website you’re visiting.

TLS (Transport Layer Security) was developed as a successor to SSL, and incorporates additional features that ensure that third parties cannot read or change messages.

When you see ‘https://’ at the beginning of a web address, the ‘s’ indicates that a secure cryptographic protocol (either SSL or TLS) is being used.

Symmetric-key (private-key) encryption

Symmetric-key encryption is the opposite of asymmetric-key encryption, and involves the use of a decryption key that is identical to the encryption key, or that can be easily deduced from inspecting it.

As a result, both keys have to be kept private to avoid anyone being able to access the plaintext version of your data. While symmetric-key encryption is quicker than its asymmetric counterpart, the keys are also much harder to distribute, as they can only be sent via trusted networks and devices. AES is the most commonly used symmetric algorithm.
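To illustrate the 'same key both ways' idea, here's a toy XOR cipher in Python - real symmetric encryption uses AES, not XOR, so treat this purely as a demonstration of why the key must stay private:

```python
from itertools import cycle

# Toy symmetric cipher: XOR with a repeating key. Applying the same
# function with the same key both encrypts and decrypts.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"
ciphertext = xor_cipher(b"my backup data", key)

print(ciphertext != b"my backup data")  # True -- no longer readable
print(xor_cipher(ciphertext, key))      # b'my backup data'
```

Anyone holding the key can reverse the encryption instantly - which is exactly the property that makes symmetric keys so hard to distribute safely.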

The Cloud

Basically the internet. A network of servers that handle all internet-based processes, from Google searches to streaming music and online storage.


Time Machine & Time Capsule

Time Machine is the backup software that comes integrated into Mac computers from OS X Leopard onward. It allows you to run constantly updated, automatic backups of all of your data - which is then saved to an external hard drive.

Time Capsule is the Apple-made external drive designed to work alongside Time Machine, although other brands should work just fine too. Recent versions, such as the AirPort Time Capsule, are also superfast routers, offering quick and easy WiFi as well as storage space.


Two-factor authentication

Two-factor authentication is a security measure that requires users to identify themselves using two different forms of verification before accessing an account or resource. Methods of verification can be remembered (such as entering a personal pin code) or physical (such as proving ownership of an ID or bank card). Online, it’s often a username & password, as well as a code delivered by text or phone call.



VPN

Virtual private networks (VPNs) act as a middle step between your computer and a host server, creating a tunnel that encrypts your information so that ISPs and governments don’t know who you are when you send or receive information online. As your identity is obscured once you've connected to a VPN server, the only thing that observers will be able to find out is that you’ve connected to the server in the first place - after that, everything you do is private.



WAN

A WAN, or Wide Area Network, is a network that spans a large geographical area. The internet as a whole is an example of a WAN, although the term can also apply to organisations that relay information between offices in multiple locations.


Zero knowledge backups

The name given to an approach to data security taken by several leading online backup providers. Zero knowledge means that the company you’re storing your files with will never be able to access your data in plaintext form or view or hold your encryption key. For the full details, and information on which backup services take a zero knowledge approach, take a look at our detailed guide.