Master your next Linux interview with our comprehensive collection of questions and expert-crafted answers. Get prepared with real scenarios that top companies ask.
systemd manages the system with units, and a service is just one unit type. A unit file is the config that tells systemd what to start, how to start it, what it depends on, and when it should run. For a service, that lives in a .service file with sections like [Unit], [Service], and [Install].
Targets are grouping units, kind of like runlevels but more flexible. A target does not usually run a process itself, it pulls in other units. For example, multi-user.target brings up core system services, and graphical.target adds the GUI stack. Services and targets connect through dependencies like WantedBy=multi-user.target in the service file, which makes systemctl enable create the right symlink so that target starts the service automatically.
They work together to decide who can access a file, and what the default access looks like when it is created.
- Ownership is shown by ls -l, changed with chown and chgrp.
- Permissions are the r, w, x bits on files and directories.
- New files start from 666, and new directories from 777; then umask subtracts permissions, it does not add them.
- umask 022 gives files 644 and directories 755; umask 027 gives files 640 and directories 750.
- One subtlety: execute is usually not given by default to regular files, even if the math suggests it.
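The subtraction is easy to demonstrate in a throwaway directory; a quick sketch (paths and names are illustrative):

```shell
# Work in a temporary directory so nothing real is touched
tmp=$(mktemp -d)
cd "$tmp"

umask 027          # files: 666 - 027 -> 640, dirs: 777 - 027 -> 750
touch file.txt     # new file starts from 666, minus the mask
mkdir newdir       # new directory starts from 777, minus the mask

stat -c '%a %n' file.txt newdir
# 640 file.txt
# 750 newdir
```

Strictly speaking umask is a bitwise mask rather than arithmetic subtraction, but for common masks like 022 and 027 the subtraction mental model gives the same answer.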
They’re special permission bits that change how execution or deletion works beyond normal rwx permissions.
- setuid on an executable makes it run with the file owner’s effective UID, not the user launching it. Example: passwd runs as root so it can update /etc/shadow.
- setgid on an executable makes it run with the file’s group ID. On a directory, new files inherit the directory’s group, which is useful for shared team folders.
- sticky bit on a directory means users can only delete or rename files they own, or that root owns, even if the directory is world-writable. Classic example: /tmp.
- You’ll see them as s or t in ls -l, like rws, rwxr-s, or drwxrwxrwt. Numeric examples are 4755, 2755, and 1777.
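You can set and inspect the bits without root on files you own; a sketch (on your own script the setuid bit has no privileged effect, it just demonstrates the notation):

```shell
tmp=$(mktemp -d)
cd "$tmp"

touch tool.sh
chmod 4755 tool.sh   # setuid + rwxr-xr-x
mkdir shared
chmod 1777 shared    # sticky + rwxrwxrwx, like /tmp

stat -c '%a %A %n' tool.sh shared
# 4755 -rwsr-xr-x tool.sh
# 1777 drwxrwxrwt shared
```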
SELinux and AppArmor are Linux Mandatory Access Control systems. They add a layer beyond normal Unix permissions by restricting what a process can read, write, execute, or connect to, even if it gets compromised. SELinux is label based and more common on RHEL, CentOS, Rocky. AppArmor is path based and often seen on Ubuntu, Debian.
In practice, I have worked more with SELinux:
- I usually start with getenforce, sestatus, and audit logs in /var/log/audit/audit.log.
- When an app breaks, I check denials with ausearch -m AVC or sealert -a.
- For quick validation, I may switch to permissive mode temporarily, not as a fix.
- I fix issues by setting the right context, like semanage fcontext plus restorecon, or enabling a needed boolean.
- Example, I allowed Nginx to connect to a backend by enabling httpd_can_network_connect.
su switches you to another user, usually root, and gives you that user’s shell. You authenticate with the target user’s password. sudo runs a single command as another user, usually root, using your own password and policy rules.
- su is broader: it changes identity for a session. sudo is more controlled and auditable.
- sudo logs command usage, supports per-user and per-group permissions, and follows least privilege better.
- su - gives a full login shell with the target user’s environment; plain su keeps more of the current environment.
- Edit the sudoers file with visudo, because it does syntax checking and prevents concurrent edits.
- Prefer drop-in files in /etc/sudoers.d/ over modifying /etc/sudoers directly, use groups, and grant only required commands.
- Validate with visudo -c, test in a second session, and avoid broad NOPASSWD: ALL unless tightly justified.

I’d start broad, then narrow from system level to process, then to thread, syscall, or I/O behavior.
- System level: uptime, top, mpstat, and check load average versus CPU percent.
- Process level: top -H, ps -eo pid,ppid,cmd,%cpu --sort=-%cpu, and note if it is user or kernel heavy.
- Thread level: top -H -p <pid>, ps -Lp <pid> -o pid,tid,%cpu,psr,comm.
- Time breakdown: %us, %sy, %wa in vmstat 1 or iostat.
- Deeper: strace -p <pid> for syscall loops, or perf top and perf record for hotspots.
- Act on findings: renice, taskset, restart the service, or scale out after finding the cause.

I’d keep it simple: create a unit file, validate it, enable it, then use systemctl and journalctl to debug.
- Create /etc/systemd/system/myapp.service with [Unit], [Service], and [Install] sections.
- In [Service], define ExecStart=/path/to/app, set User=appuser, WorkingDirectory=..., and usually Restart=on-failure.
- Reload systemd with systemctl daemon-reload, start it using systemctl start myapp, then enable it at boot with systemctl enable myapp.
- Check status with systemctl status myapp, and logs with journalctl -u myapp -xe or -f for live output.
- Common failure points: a wrong ExecStart, missing permissions, bad environment variables, the wrong service type like Type=simple vs forking, or SELinux blocking execution.

If it will not start, I also run systemd-analyze verify /etc/systemd/system/myapp.service to catch unit syntax problems fast.
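A minimal unit file along those lines might look like this (the service name, user, and paths are placeholders):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My application
After=network.target

[Service]
Type=simple
User=appuser
WorkingDirectory=/opt/myapp
ExecStart=/opt/myapp/bin/app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving it, systemctl daemon-reload picks up the change, and systemctl enable --now myapp both starts it and registers it for boot.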
I’d work it layer by layer, network first, then SSH, then system health, so I do not waste time debugging the wrong thing.
- Network path: ping, traceroute, and test port 22 with nc -zv host 22 from another machine.
- Host networking: ip a, ip r, NIC status, firewall rules in iptables or nft, security groups, and recent route or DNS changes.
- SSH service: systemctl status sshd, ss -tlnp | grep :22, and review /var/log/auth.log or /var/log/secure.
- System health: top, free, df -h, dmesg.
- If the config is at fault, fix sshd_config, then restart sshd carefully.
I start by confirming whether it is true RAM pressure, swap thrashing, or a leak, then narrow it to the process, workload, or kernel behavior.
- System view: free -h, vmstat 1, sar -r, and /proc/meminfo.
- OOM kills: dmesg -T or journalctl -k, they often name the killed process.
- Per-process view: top, htop, ps aux --sort=-%mem, and compare RSS vs VIRT.
- Deeper profiling: smem, pmap, app metrics, or heap profilers.
- Containers and services: systemctl status, docker stats, or /sys/fs/cgroup.

Then I correlate with recent deploys, traffic spikes, or config changes, and either tune limits, fix the leaking service, or add memory.
They sit at different layers of a Linux system.
- The kernel is the core of the OS: it manages hardware, memory, processes, and system calls.
- The init system is the first userspace process (PID 1), for example systemd and sysvinit.
- The shell is the command-line interface, like bash or zsh. It reads commands, launches programs, and supports scripting.

A simple way to explain it in an interview is: kernel runs the machine, init brings the system up, shell lets the user interact with it. The shell depends on the kernel, and usually starts after init has set up userspace.
I start with systemd, then the journal, then the app logs. That gives me the fastest path to whether it is a unit issue, a dependency problem, or the service itself crashing.
- systemctl status <service>: shows state, exit code, recent errors, and dependency failures.
- journalctl -u <service> -b: gets service logs from the current boot, often enough to spot config or permission errors.
- App-specific logs in /var/log, or wherever the service writes, like Nginx, MySQL, or custom app logs.
- systemctl cat <service> and systemctl show <service>: verify the unit file, overrides, environment, restart policy, and exec path.
- System health: ss -lntp, ps aux, top, df -h, free -m, and dmesg, to rule out port conflicts, dead processes, resource exhaustion, or kernel issues.

/etc/fstab is the static filesystem table. It tells Linux which filesystems, swap spaces, and mount points to use, plus options like defaults, noatime, or nofail. At boot, systemd or the init process reads it to mount disks automatically. It is also used by mount -a, so it is the central place for persistent mounts.
If it is misconfigured, a few bad things can happen:
- Wrong device or UUID, the system may fail to mount a needed filesystem.
- Bad mount point, services may break because expected paths are empty.
- Incorrect options, you can lose write access, get permission issues, or hurt performance.
- Broken root or /boot entry, the machine may drop into emergency mode or fail to boot.
- Bad swap entry, memory pressure handling gets worse.
Best practice is to use UUIDs, test with mount -a, and keep recovery access handy.
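An entry following those practices might look like this (the UUID and mount point are examples; blkid prints the real UUID to use):

```
UUID=0a3407de-014b-458b-b5c1-848e92a327a3  /data  ext4  defaults,noatime,nofail  0  2
```

Here nofail keeps a missing data disk from blocking boot, and mount -a is a safe way to test the line before rebooting.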
I check both capacity and inode usage, because a filesystem can look "full" for either reason.
- df -h to see disk space by filesystem: size, used, available, mount point.
- df -i to see inode usage; if IUse% is 100%, you have inode exhaustion.
- Find what is using space with du -sh /* 2>/dev/null | sort -h, or drill into the busy mount.
- Count files with find /path -xdev | wc -l, then inspect directories with lots of small files like logs, cache, mail spools.

If apps report "No space left on device" but df -h looks okay, I immediately check df -i, because that error happens with inode exhaustion too.
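The file-count side is easy to sketch in a scratch directory (names are illustrative):

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Simulate a directory filling up with many small files
mkdir cache
for i in $(seq 1 500); do : > "cache/item.$i"; done

# Count entries without crossing into other mounted filesystems (-xdev)
find cache -xdev -type f | wc -l
# 500
```

Each of those files consumes an inode even though it holds no data, which is exactly how a filesystem runs out of inodes while df -h still shows free space.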
A hard link is another directory entry pointing to the same inode as the original file. A symbolic link is a separate file that stores a pathname to another file or directory.
I use hard links when I want two filenames to behave like the exact same file, often for space efficiency. I use symlinks when I want a flexible pointer, like current -> /opt/app/v2, shared config paths, or shortcuts across filesystems.
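The difference shows up directly in the inode numbers; a quick sketch:

```shell
tmp=$(mktemp -d)
cd "$tmp"

echo "v2 config" > original.txt
ln original.txt hard.txt       # hard link: another name for the same inode
ln -s original.txt soft.txt    # symlink: a separate file storing a path

# The hard link shares the inode with the original
[ "$(stat -c %i original.txt)" = "$(stat -c %i hard.txt)" ] && echo "same inode"
# same inode

# The symlink only records the target path
readlink soft.txt
# original.txt
```

Deleting original.txt would leave hard.txt fully intact but turn soft.txt into a dangling link, which is the practical consequence of the inode-vs-path distinction.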
I’d answer this with a quick STAR flow: situation, tradeoff, actions, result. Pick an example where business pressure was real, but you still showed good engineering judgment.
At a previous job, we had to roll out urgent OpenSSL updates across production Linux web servers after a high severity advisory. Speed mattered because of exposure, stability mattered because these boxes handled customer traffic, and security mattered because delaying patching increased risk. I split the rollout into phases, patched and reboot tested a staging group first, then a small production canary behind the load balancer. I used config management to keep changes consistent, verified service health with monitoring and smoke tests, and scheduled the wider rollout during a low traffic window. We finished the same day with no customer impact, and I documented the runbook so the next emergency patch cycle was faster.
I’ve worked most with Ubuntu and Debian, RHEL/CentOS and Rocky, and a bit of SUSE and Alpine.
apt with .deb packages, very dependency-friendly, huge repos, and common in cloud and app hosting.yum or dnf with .rpm packages, stronger enterprise tooling, predictable lifecycles, and tighter vendor support..rpm, but package management is usually through zypper, which is solid for patching and repo handling.apk, very lightweight, musl-based, and popular for containers where small image size matters.systemd, so systemctl is the standard.systemd unit files and targets.At a high level, firmware initializes the machine, hands off to a bootloader, the kernel starts hardware and mounts a root filesystem, then init brings up user space until you get a login prompt.
initramfs into memory.initramfs.initramfs loads needed modules, finds the real root filesystem, then does switch_root or pivot_root.systemd, which mounts filesystems, starts services, networking, logging, and targets.systemd starts a getty on a TTY, or a display manager for GUI, and that gives you the login prompt.I’ve administered Linux in production across web, API, and batch workloads, mostly on Ubuntu, RHEL, and Amazon Linux. My focus has been reliability, security, and making systems easy to operate at scale.
systemd services, users, SSH hardening, sudo, firewalls, SELinux basics, and backup/restore processes.top, vmstat, iostat, sar, journalctl, and logs in /var/log.A solid example, I helped stabilize a high traffic API cluster by tuning file descriptor limits, fixing log rotation, and identifying disk latency during peak load. That cut recurring incidents and improved recovery time.
I’d widen the check beyond basic rwx bits, because “Permission denied” often comes from something else in the access path.
id, and test the path with namei -l /path/file.getfacl, they can override what ls -l suggests.lsattr.getenforce, ls -Z, and audit logs often expose denials.root_squash, noexec, or UID mapping issues.strace on the failing command to see the exact syscall and object being denied.In interviews, I’d say I follow layers, identity, path traversal, extended controls, then security modules and mounts.
Linux creates a process mainly with fork(), which copies the parent process, then often exec() replaces that copy with a new program. Modern systems may also use clone() for finer control, especially for threads and containers. Every process gets a PID and starts with resources like memory mappings, file descriptors, and environment variables inherited from the parent.
Management is handled by the kernel scheduler and process states. A process can be running, runnable, sleeping, stopped, or zombie. The kernel schedules CPU time based on priority and policy, like normal CFS or real-time classes. Parents can monitor children with wait() and receive signals such as SIGCHLD. Admins manage processes with tools like ps, top, kill, nice, and systemd, which also tracks services using cgroups for resource limits and isolation.
They measure different things. CPU utilization is the percentage of time the CPU is busy doing work. Load average is the average number of tasks that are either running on CPU or waiting in uninterruptible sleep, usually I/O wait, over 1, 5, and 15 minutes.
Load is read relative to CPU count: on a 4-core box, a load of 4 means roughly full saturation, 8 means sustained queueing. So, utilization is a percentage, load is a queue depth style signal.
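A quick way to read load in context is to print it next to the core count (a sketch; the numbers themselves depend on the machine):

```shell
cores=$(nproc)

# First three fields of /proc/loadavg are the 1, 5, and 15 minute averages
read one five fifteen rest < /proc/loadavg

echo "cores=$cores load(1m)=$one load(5m)=$five load(15m)=$fifteen"
# Rule of thumb: sustained 1-minute load well above the core count means queueing
```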
I’d work top-down: confirm the symptom, find the busy device, then decide whether it’s throughput, latency, queueing, or filesystem related.
- iostat -xz 1: check %util, await, svctm, r/s, w/s, avgqu-sz. High await and queue depth usually signal contention.
- iotop or pidstat -d 1 to find which processes are generating I/O.
- vmstat 1: high wa plus swapping can look like a disk issue.
- sar -d, lsblk, and /proc/diskstats: confirm whether one disk, LVM layer, or RAID device is the hotspot.
- df -h, df -i, mount, and dmesg for ext4/xfs errors or controller resets.
- Deeper tools: blktrace, fio, or smartctl to separate workload limits from failing hardware.

I’d answer it as a layered process: confirm what sits under the filesystem, expand the block device, then grow the filesystem online if supported.
- Map the stack with lsblk, df -hT, and pvs/vgs/lvs if LVM is involved.
- Grow the partition with growpart or parted; if using LVM, run pvresize, then extend the LV with lvextend.
- Grow the filesystem: xfs_growfs for XFS, resize2fs for ext4, usually mounted and live.
- Verify with df -h, lsblk, and check logs for errors.

Minimal downtime comes from using online resize paths. I’d still take a snapshot or backup first, confirm filesystem type, and have a rollback plan if the storage layer does not rescan cleanly.
In Linux, virtualization and containerization solve isolation differently.
A simple way to say it in an interview is, a VM is like a full house, a container is like an apartment in the same building.
The standard way is sudo, not sharing the root password. You give the user only the commands they need, ideally via a group or a small rule in /etc/sudoers, and you edit it with visudo so syntax errors do not lock you out.
- Add the user to the admin group, wheel or sudo, if broad admin rights are acceptable.
- Or grant only specific commands, like systemctl restart nginx or journalctl.
- Confirm with sudo -l -U username to verify exactly what they can run.

If I wanted tighter control, I’d prefer a dedicated group plus a narrow sudoers rule over full root access.
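A narrow rule of that shape, as a drop-in file (the group name and command list are examples; always edit with visudo -f so syntax is checked):

```
# /etc/sudoers.d/web-ops
%web-ops ALL=(root) /usr/bin/systemctl restart nginx, /usr/bin/journalctl -u nginx
```

Members of the web-ops group can then run exactly those two commands as root, and visudo -c validates the whole sudoers set before you rely on it.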
Here is the clean way to explain it in an interview:
- A process is an independent running program with its own address space, PID, and resources.
- A thread is an execution unit inside a process; threads share that process’s memory.
- A daemon is a long-running background process, often started by systemd, that provides a service like sshd or cron.
- A zombie is a process that has exited but whose parent has not yet collected its exit status with wait().

The key distinction is ownership and lifecycle. Processes are isolated, threads are shared within a process, daemons are service-style background processes, and zombies are dead processes waiting to be reaped.
I usually start by isolating the layer where it breaks, local stack, routing, DNS, or remote reachability.
- ip addr, ip link, ethtool: check interface state, IP, duplex, errors.
- ip route, ss -tulpn: verify routing, default gateway, listening sockets, active connections.
- ping, traceroute or tracepath: test reachability and where packets stop.
- dig, nslookup, resolvectl: separate DNS problems from raw network problems.
- curl -v, nc, telnet: test specific ports, TLS handshakes, HTTP behavior.
- tcpdump and sometimes wireshark: inspect packets, retransmits, resets, ARP issues.
- journalctl, dmesg, NetworkManager logs: catch driver, DHCP, or link flap errors.
- iptables or nft, firewall-cmd: verify firewall or NAT isn’t blocking traffic.

If it’s intermittent, I’ll also use mtr for ongoing path quality and latency loss patterns.
I’d work from scope, source, and timing, then confirm whether it’s a user issue, a service issue, or active abuse.
- Check the logs: /var/log/auth.log on Debian/Ubuntu, /var/log/secure on RHEL, or journalctl -u sshd -u sudo -u sssd.
- Check account state: passwd -S, chage -l, faillock --user, and look for expired, locked, or disabled accounts.
- Check the auth stack: /etc/pam.d/*, sshd_config, nsswitch.conf, SSSD, LDAP, Kerberos.
- Check for abuse: fail2ban, firewall logs, IDS, and check if the source is internal automation using stale credentials.

Signals are an async way for the kernel or another process to notify a process that something happened. Each signal has a number and default action, like terminate, stop, continue, or ignore. A process can catch or ignore many signals with handlers, but some, like SIGKILL and SIGSTOP, cannot be caught, blocked, or ignored.
- SIGTERM is the polite shutdown signal; use it first.
- SIGKILL is the force option; the kernel kills the process immediately.
- Use SIGKILL only if the process is hung, ignoring SIGTERM, or stuck in a bad state.
- In practice: kill -TERM pid, wait, then kill -KILL pid if needed.

In interviews, I’d mention that systemd, containers, and orchestration tools usually send SIGTERM first for graceful shutdown.
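The escalation path is easy to demonstrate, because a process killed by a signal exits with status 128 plus the signal number (a sketch, using sleep as a stand-in):

```shell
sleep 60 &                     # stand-in for a misbehaving process
pid=$!

kill -TERM "$pid"              # polite request first
wait "$pid" && status=0 || status=$?

echo "exit status: $status"    # 143 = 128 + 15 (SIGTERM)
# A truly stuck process would then get: kill -KILL "$pid"  -> status 137 = 128 + 9
```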
I’d work from the client outward: confirm the symptom, check local resolver config, then test the DNS path step by step.
- Separate DNS from raw connectivity: ping 8.8.8.8 vs ping google.com.
- Check /etc/resolv.conf: search domains, nameserver entries, and whether NetworkManager or systemd-resolved manages it.
- Query directly with dig or nslookup, for example dig google.com and dig @8.8.8.8 google.com.
- Check lookup order in /etc/nsswitch.conf, plus cache or resolver status with resolvectl status or systemctl status systemd-resolved.
- Verify port 53 traffic with ss, tcpdump, or nc.
- Review resolver logs in journalctl, compare multiple DNS servers, and rule out stale cache or split-DNS/VPN issues.

TCP is connection-oriented, UDP is connectionless. TCP does a handshake, tracks sequence numbers, retransmits lost packets, and guarantees ordered delivery. UDP just sends datagrams with no delivery guarantee, no ordering, and much less overhead.
That changes troubleshooting a lot in Linux:
- For TCP, check if the port is listening and whether the handshake completes, using ss -lntp, telnet, nc, or tcpdump.
- Common TCP symptoms are connection refused, timeouts, resets, backlog issues, or firewall blocks.
- For UDP, a port can look "open" but you still may not get replies, because silence is normal.
- UDP troubleshooting relies more on packet capture, app logs, counters, and checking both directions with tcpdump -ni any udp port <port>.
- Examples: web, SSH, databases are usually TCP; DNS, syslog, SNMP, and many streaming/VoIP workloads often use UDP.
Routing on a Linux host is the kernel deciding where to send packets based on the routing table. It looks at the destination IP, picks the most specific match, then forwards traffic either directly to a local subnet or to a next-hop gateway through an interface. If nothing more specific exists, it uses the default route. Policy routing can add extra logic with multiple tables and rules.
- Inspect with ip addr, ip route, and ip rule to see interfaces, routes, and policy rules.
- Check which route a specific destination uses with ip route get <destination>.
- Verify neighbor/ARP state with ip neigh, and confirm packets with tcpdump -i <iface>.

If fixing it, I’d add or replace the route with ip route add or ip route replace, test, then make it persistent in the network config.
I’d answer it in two steps, identify the process, then validate whether it belongs there.
- ss -ltnp for TCP or ss -lunp for UDP, then filter with grep :PORT.
- Alternatively, lsof -i :PORT or netstat -tulpn on older systems.
- Identify the owner with ps -fp PID and systemctl status <service>.
- Check the bind address: 127.0.0.1 may be fine internally, 0.0.0.0 is broader exposure.

If I’m unsure whether it should be listening, I ask: what service owns it, is it required in this environment, is the exposure intentional, and does it match our hardening baseline. If not, I’d stop or disable it, then verify impact first.
I think about hardening in layers, starting with reducing attack surface, then tightening access, then improving detection and recovery.
- Tighten access: SSH hardening, strong authentication, and least privilege through sudo.
- Firewalling with nftables or firewalld: restrict management access, segment sensitive services.
- Harden mounts and mandatory access control: use noexec where appropriate, enable SELinux or AppArmor.
- Improve detection: enable auditd, monitor file integrity and suspicious auth events.

I’d answer this as a mix of policy mapping, automated validation, and evidence collection.
I use OpenSCAP, Lynis, or vendor tools to scan the host against a profile and generate reports.

I start broad, then narrow fast so I do not lose volatile evidence.
- Snapshot current state first: ip a, ip route, ss -tupna, and recent logs.
- Capture with tcpdump, for example tcpdump -i eth0 -nn -s0 -C 100 -W 10 -w /tmp/incident.pcap, which rotates files and avoids DNS lookups.
- Scope the capture with filters like host 10.0.0.5, port 443, or net 192.168.1.0/24, to reduce noise.
- Analyze with tcpdump -r incident.pcap, tshark, or Wireshark on a copy, looking for unusual egress, beaconing, scans, resets, and odd DNS.
- Correlate with ss, lsof -i, conntrack -L, firewall counters, and timestamps from syslog or EDR.

If the box is sensitive, I prefer writing captures to a separate disk and hashing the pcap for chain of custody.
I’d work top down and time-correlate symptoms first, because “intermittent” usually means I need evidence during the bad window.
- Correlate timing first: journalctl, deploys, cron jobs, backups, or traffic spikes.
- System level: uptime, top, vmstat 1, iostat -xz 1, sar, looking at CPU steal, run queue, memory pressure, swap, and disk latency.
- Memory detail: free -m, sar -B, dmesg for OOM, page reclaim, or THP and NUMA-related noise.
- Process level: pidstat -durh 1 -p <pid>, strace -p, perf top, and open files or socket pressure with lsof, ss -tpn.
- Network: ss, sar -n DEV, tcpdump, dig.

If I need a concrete example, I once traced periodic slowness to iowait spikes from logrotate plus compression on the same volume as the app.
I have hands-on experience with all three, mostly on Linux servers in production and lab environments. My strongest background is with iptables and firewalld, and I have also worked with nftables during migrations on newer distros like RHEL 8 and Debian 11.
- With iptables, I have built host-based rules for SSH, web traffic, NAT, port forwarding, and basic rate limiting.
- With firewalld, I usually manage zones, services, rich rules, and permanent plus runtime changes on RHEL-based systems.
- With nftables, I have created simple rulesets and translated older iptables logic into tables, chains, and sets.

I’d answer this with a quick STAR flow: task, what the script did, how I made it safe, and the result.
At a previous job, I wrote a Bash script to automate log cleanup and disk space alerts across several Linux app servers. We had teams manually checking df -h, deleting old rotated logs, and restarting one service if a filesystem filled up. The script checked usage thresholds, archived and removed logs older than a set number of days, compressed large files, and sent an email and Slack alert with the server name and top directories from du -sh. I added safeguards like set -e, a dry-run flag, logging, and excluded critical paths. Then I scheduled it with cron. It cut manual cleanup work from a few hours a week to almost nothing and helped prevent repeat disk-full incidents.
I treat shell scripts like production code: fail early, validate inputs, and make behavior obvious.
- Start with #!/usr/bin/env bash, then set -Eeuo pipefail to catch common failures.
- Quote variables, prefer $(...), and use arrays to avoid word-splitting bugs.
- Validate inputs early with a usage(), and check dependencies with command -v.
- Structure the script with functions and a main function for flow.
- Use trap for cleanup and useful error context, like line number and command.
- Enable set -x selectively for debugging.
- Lint with shellcheck, format with shfmt, and test happy path plus failure cases.

For maintainability, keep scripts idempotent, avoid hardcoded paths, use constants like readonly, and document assumptions near the code that depends on them.
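A small skeleton pulling those habits together (the script name, argument, and the tar dependency check are illustrative; it is written to a file here so it can be run directly):

```shell
cat > safe.sh <<'EOF'
#!/usr/bin/env bash
set -Eeuo pipefail

usage() { echo "usage: $0 <name>" >&2; exit 2; }

workdir=""
cleanup() { [ -z "${workdir:-}" ] || rm -rf "$workdir"; }   # runs on any exit
trap cleanup EXIT
trap 'echo "error on line $LINENO: $BASH_COMMAND" >&2' ERR

main() {
  [ "$#" -eq 1 ] || usage
  command -v tar >/dev/null || { echo "missing dependency: tar" >&2; exit 1; }

  workdir=$(mktemp -d)     # scratch space, removed by the EXIT trap
  echo "hello, $1"
}

main "$@"
EOF
chmod +x safe.sh

./safe.sh world
# hello, world
```

Run with no arguments it prints usage and exits with status 2, and the EXIT trap cleans up the temp directory on every path, which is the behavior the bullet points above aim for.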
In Linux, processes get three default data streams: stdin for input, stdout for normal output, and stderr for errors. Keeping output and errors separate matters in admin work, because you can log clean results while still seeing failures. By default, input comes from the keyboard, and output and errors go to the terminal.
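A sketch of that separation, using a small function that writes to both streams:

```shell
tmp=$(mktemp -d)
cd "$tmp"

# A command that writes one line to stdout and one to stderr
emit() { echo "result: ok"; echo "warning: low disk" >&2; }

emit > out.log 2> err.log   # split the streams into separate files

cat out.log
# result: ok
cat err.log
# warning: low disk

emit 2>/dev/null | grep -c "ok"   # a plain pipe carries stdout only
# 1
```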
- Redirect with > to a file, >> to append, < for input.
- 2> redirects only errors, which is useful for troubleshooting or clean scripting.
- Pipes, |, send stdout from one command into another, like ps aux | grep nginx.

Both schedule recurring jobs, but systemd timers are more integrated and easier to manage on modern Linux.
- cron is simple, lightweight, and portable. Good for basic time-based jobs like 0 2 * * *.
- systemd timers pair with service units, so you get logging in journald, dependencies, resource controls, retries, and better visibility with systemctl.
- systemd supports Persistent=true, which runs missed jobs after downtime. Cron usually skips them unless you use anacron.

I’d choose cron for quick, universal scripts or older systems. I’d choose systemd timers on modern distros when I want observability, service management, boot-aware scheduling, or tighter control over execution.
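A timer along those lines is a pair of units; a sketch (the names, script path, and schedule are examples):

```ini
# /etc/systemd/system/cleanup.service
[Service]
Type=oneshot
ExecStart=/usr/local/bin/cleanup.sh

# /etc/systemd/system/cleanup.timer
[Unit]
Description=Nightly cleanup

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

systemctl enable --now cleanup.timer activates it, and systemctl list-timers shows the next scheduled run, which is the visibility cron lacks.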
For large logs, I start broad, then narrow fast. The goal is to reduce volume, anchor on time, and correlate by IDs like request ID, PID, user, or IP.
- grep -i, zgrep, or rg for fast keyword searches, and add -n for line numbers.
- Narrow by time with awk, sed, or journal fields like journalctl --since --until.
- Chain filters, like grep "ERROR" app.log | grep "request_id=123", to cut noise quickly.
- Aggregate with sort | uniq -c | sort -nr to spot spikes or repeated failures.
- Follow live with tail -f or journalctl -f, then reproduce the issue if possible.

If logs are structured, I prefer jq for JSON. In practice, I usually find one known bad event, grab its timestamp and correlation ID, then trace backward and forward a few minutes to identify the root cause.
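The filter-then-aggregate pattern can be sketched on a tiny sample log (the log format and IDs are made up):

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Tiny sample standing in for a large application log
cat > app.log <<'EOF'
2024-05-01T10:00:01 INFO  request_id=122 ok
2024-05-01T10:00:02 ERROR request_id=123 db timeout
2024-05-01T10:00:03 ERROR request_id=123 retry failed
2024-05-01T10:00:04 INFO  request_id=124 ok
EOF

# Narrow by severity, then correlate by ID
grep "ERROR" app.log | grep "request_id=123"
# 2024-05-01T10:00:02 ERROR request_id=123 db timeout
# 2024-05-01T10:00:03 ERROR request_id=123 retry failed

# Aggregate to spot the most repeated ID
grep -o "request_id=[0-9]*" app.log | sort | uniq -c | sort -nr | head -n 1
#       2 request_id=123
```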
I usually answer this by covering policy, tooling, validation, and recovery time.
- Tools: rsync, tar, borg, and snapshot-based backups with LVM or storage arrays, depending on RPO and retention needs.
- Scope: /etc, app data, databases, and boot-critical items like /boot and EFI.
- Databases get consistent dumps with mysqldump, xtrabackup, or pg_dump, not just filesystem copies.
- Schedule with cron or systemd timers, encrypt backups, ship copies offsite, and follow a 3-2-1 strategy.

Example: I set up Borg with daily incremental backups, weekly prune policies, and quarterly restore drills for a web stack, which cut recovery from hours to under 30 minutes.
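A minimal sketch of the archive-and-verify habit with tar (paths and names are placeholders; a real job would target /etc and ship the archive offsite):

```shell
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for config files worth backing up
mkdir -p etc-backup/app
echo "port=8080" > etc-backup/app/app.conf

# Create a compressed, dated archive
tar -czf "config-$(date +%F).tar.gz" etc-backup/

# Verify the archive lists the expected contents before trusting it
tar -tzf config-*.tar.gz | grep "app.conf"
# etc-backup/app/app.conf
```

The listing step is the cheap version of a restore drill: a backup you have never read back is not really a backup.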
I’ve worked with containers mainly through Docker, containerd, and Kubernetes on Linux, plus some lower-level troubleshooting with runc and Podman. In practice, that meant building images, tuning resource limits, debugging networking and filesystem issues, and investigating why a container could see or not see certain processes, mounts, or devices.
Under the hood, isolation comes from kernel namespaces and cgroups; the container runtime, like runc, sets up those kernel features, while Docker or Kubernetes mostly orchestrate around them.

I’ve worked with both local log rotation and centralized logging in production Linux environments, mostly using logrotate, rsyslog, journald, and ELK or Graylog stacks.
- Locally, I manage /etc/logrotate.d/ policies with size and time based rotation, compression, retention, and copytruncate or service reloads depending on the app.
- Centrally, I have shipped logs with rsyslog or Filebeat into Elasticsearch, and used structured logs when possible to improve searchability.

I’d answer this with a tight STAR format: situation, actions, outcome, and what I changed to prevent it happening again.
At a previous job, a production Linux VM stopped accepting app traffic after a firewall change. I confirmed the app was healthy locally with ss, curl localhost, and systemctl status, so I narrowed it to networking. I used the cloud serial console because SSH was blocked, reviewed nft rules, and found a bad default drop policy applied before the allow rules loaded. I rolled back the ruleset, restored access, and validated from both the load balancer and host. Afterward, I added a staged firewall deployment, automated rule validation, and out-of-band access checks. That turned a high pressure outage into a 15 minute recovery with a cleaner process afterward.
I treat patching as a risk management and change management exercise, not just package updates.
I usually group them by saturation, errors, and user impact. The goal is to catch bottlenecks early and tie low level signals to application symptoms.
- CPU: %user, %system, %iowait, run queue, load average, to spot contention vs blocked work.
- Disk: %util and latency, to find storage bottlenecks, especially under write bursts.

Tools I’d reach for are top, vmstat, iostat, sar, ss, and Prometheus plus Grafana.
LVM, Logical Volume Manager, sits between disks and filesystems and gives you flexible storage. Instead of carving one disk into fixed partitions, you create physical volumes, group them into a volume group, then carve out logical volumes that can be resized or moved more easily.
A caution: lvremove can be destructive across a larger storage pool, so double-check the target volume before running it.