Unit tests are meant to verify the functionality of isolated units of code. When dealing with code whose output depends on the system or system configuration, what are approaches to writing effective unit tests? I feel this problem plagues lower-level systems languages more, so I am asking it here.
Currently I solve this by writing "unit tests" whose output I then manually compare against the output of command-line utilities. It is the quickest way to verify that units work as expected, but it is obviously not automated.
Spinning up a container or a VM to run integration tests seems like the next easiest option, but I am not sure whether there are other cost-effective approaches.
Scenario
Say I have a function

```c
int get_ip_by_ifname(const char *if_name, struct in_addr *ipaddr);
```
Inputs:
- string containing the interface name
- pointer to the variable the IP address will be written to

Returns:
- -1 if the interface does not exist
- 0 if the interface exists but has no IPv4 address
- 1+ if the interface exists and has at least 1 IPv4 address (some interfaces have multiple addresses; only the first is written to the ipaddr buffer)
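For concreteness, here is roughly how such a function might be implemented on Linux with getifaddrs(3). This is a sketch, not my exact code, but the return values match the contract above:

```c
#include <ifaddrs.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Sketch: returns -1 if if_name is not found, otherwise the number of
 * IPv4 addresses on that interface; the first address found is copied
 * into *ipaddr. */
int get_ip_by_ifname(const char *if_name, struct in_addr *ipaddr)
{
    struct ifaddrs *head, *ifa;
    int found_if = 0, count = 0;

    if (getifaddrs(&head) == -1)
        return -1;              /* cannot enumerate: treat as not found */

    for (ifa = head; ifa != NULL; ifa = ifa->ifa_next) {
        if (strcmp(ifa->ifa_name, if_name) != 0)
            continue;
        found_if = 1;
        if (ifa->ifa_addr != NULL && ifa->ifa_addr->sa_family == AF_INET) {
            if (count++ == 0)   /* only the first address is reported */
                *ipaddr = ((struct sockaddr_in *)ifa->ifa_addr)->sin_addr;
        }
    }
    freeifaddrs(head);
    return found_if ? count : -1;
}
```

The important point is that the result depends entirely on whatever getifaddrs(3) returns at run time, which is why the test cases below need a cooperating system.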
Test Cases and their dependencies
- Interface doesn't exist
  - easy to test: use an uncommon interface name
- Interface exists but has no IPv4 address
  - requires the underlying system to have an interface with a known, unique name, which I need to hard-code into my unit test
- Interface exists, has exactly 1 IPv4 address
  - requires the underlying system to have the uniquely named interface with exactly 1 known IP address, both of which I need to hard-code into my test
- Interface exists, has more than 1 IPv4 address
  - similar to the previous case
The way I currently verify something like this is to write a test that logs each case's output to the terminal, then run `ip -c a` in another terminal and compare the information in the two outputs. I verify it works as expected manually with very minimal setup (I just assigned multiple IP addresses to one of my interfaces).
I would like to test this in an automated fashion. Is there any way that won't be a time sink?
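One approach I have been sketching to remove the system dependency entirely: split the getifaddrs(3) call from the lookup logic, so the lookup half can be unit tested against a fabricated ifaddrs chain while production code passes it the real list. The function name here is made up for illustration:

```c
#include <ifaddrs.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Pure lookup over an already-fetched list. A thin production wrapper
 * calls getifaddrs(), passes the result here, then frees it; unit tests
 * pass a hand-built chain instead, so no real interfaces are needed. */
int get_ip_from_list(const struct ifaddrs *list, const char *if_name,
                     struct in_addr *ipaddr)
{
    int found_if = 0, count = 0;
    const struct ifaddrs *ifa;

    for (ifa = list; ifa != NULL; ifa = ifa->ifa_next) {
        if (strcmp(ifa->ifa_name, if_name) != 0)
            continue;
        found_if = 1;
        if (ifa->ifa_addr != NULL && ifa->ifa_addr->sa_family == AF_INET) {
            if (count++ == 0)   /* only the first address is reported */
                *ipaddr = ((const struct sockaddr_in *)ifa->ifa_addr)->sin_addr;
        }
    }
    return found_if ? count : -1;
}
```

All four cases can then be exercised deterministically by building fake chains (interface missing, present with no AF_INET entry, one address, several addresses) with no dependency on the machine's actual network configuration. The remaining wrapper is trivial enough to leave to an integration test.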