Monthly Archives: December 2014

Adding Networking Support to ELK: Part 2

In a previous post I described how I started to add LwIP networking support to ELK, along with some of the design decisions that guide ELK development. In this post, I’ll describe how socket related system calls are added to ELK and how they interface with the LwIP network stack and the rest of the ELK modules.

A minimal source file that allows LwIP to be brought in to ELK at link time looks like this:

#include "config.h"
#include "kernel.h"
#include "lwip/tcpip.h"

// Make LwIP networking a select-able feature.
FEATURE_CLASS(lwip_network, network)

// Define socket related system calls.
ELK_CONSTRUCTOR()
{
}

// Start up LwIP.
C_CONSTRUCTOR()
{
  tcpip_init(0, 0);
}

Almost all symbols in an ELK select-able module are static. The only external linkage symbols in this module are defined by the FEATURE_CLASS() macro. In this case, the symbols __elk_lwip_network and __elk_feature_network are defined. The first symbol is used to pull this module in at link time. The second symbol is used to cause an error if another (currently non-existent) network module is linked in at the same time.
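
The mechanism can be sketched in plain C. This is my guess at how FEATURE_CLASS() might expand, not ELK's actual macro; the point is that one symbol is something the linker can resolve to pull the module in, and the other is initialized so that two modules claiming the same feature class collide with a duplicate-definition error at link time:

```c
#include <assert.h>

// Hypothetical expansion of FEATURE_CLASS(); ELK's real macro may differ.
// __elk_<name> gives the linker something to resolve when a program asks
// for the feature; __elk_feature_<class> is initialized so that a second
// module claiming the same class causes a duplicate-definition link error.
#define FEATURE_CLASS(name, class) \
  char __elk_##name;               \
  char __elk_feature_##class = 1;

FEATURE_CLASS(lwip_network, network)

// A client pulls the module in by referencing __elk_lwip_network.
char *elk_need_network = &__elk_lwip_network;
```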

ELK uses two phases of constructor functions to perform system initialization. ELK_CONSTRUCTOR() functions are called at system start-up, before the C library is initialized; they are typically used to register system call definitions. C_CONSTRUCTOR() functions are normal constructors, called after the C library has been initialized but before main() is called. No system calls are defined in this example, but the LwIP initialization function is called in the C_CONSTRUCTOR() phase.
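
The two-phase ordering resembles what GCC and Clang provide with constructor priorities, where lower numbers run earlier during startup. This is only an illustration of the ordering, not ELK's actual mechanism:

```c
// Illustration only: ELK_CONSTRUCTOR()/C_CONSTRUCTOR() are ELK macros, but
// the ordering they provide is similar to GCC/Clang constructor priorities.
// Both functions run before main(); the lower priority runs first.
static int init_order[2];
static int init_count;

__attribute__((constructor(200)))   // early phase: e.g. register syscalls
static void elk_phase(void)
{
  init_order[init_count++] = 1;
}

__attribute__((constructor(300)))   // later phase: e.g. tcpip_init(0, 0)
static void c_phase(void)
{
  init_order[init_count++] = 2;
}
```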

There are several system calls that are specific to sockets and networking. I’ll create stub functions for all of them now. Even though I’m concentrating on getting ELK running on an ARM target right now, I always build ELK for all targets. My first attempt to create a stub handler for accept4() failed to compile for the i386:

#include <sys/socket.h>

#include "config.h"
#include "kernel.h"
#include "syscalls.h"
#include "lwip/tcpip.h"
#include "crt1.h"

// Make LwIP networking a select-able feature.
FEATURE_CLASS(lwip_network, network)

static int sys_accept4(int sockfd, struct sockaddr *addr, socklen_t *addrlen,
                       int flags)
{
  return -ENOSYS;
}

// Define socket related system calls.
ELK_CONSTRUCTOR()
{
  SYSCALL(accept4);
}

// Start up LwIP.
C_CONSTRUCTOR()
{
  tcpip_init(0, 0);
}

It turns out that the i386 socket calls (and perhaps other targets) all go through one system call called SYS_socketcall. When I modified my source file like this:

#include <sys/socket.h>

#include "config.h"
#include "kernel.h"
#include "syscalls.h"
#include "lwip/tcpip.h"
#include "crt1.h"

// Make LwIP networking a select-able feature.
FEATURE_CLASS(lwip_network, network)

static int sys_accept4(int sockfd, struct sockaddr *addr, socklen_t *addrlen,
                       int flags)
{
  return -ENOSYS;
}

#ifdef SYS_socketcall
static int sys_socketcall(int call, unsigned long *args)
{
  long arg[6];

  // Get the call arguments from user space.
  copyin(arg, args, sizeof(arg));

  switch (call) {
  case __SC_accept4:
    return sys_accept4(arg[0], (struct sockaddr *)arg[1], (socklen_t *)arg[2],
                       arg[3]);

  default:
    return -ENOSYS;
  }
}
#endif

// Define socket related system calls.
ELK_CONSTRUCTOR()
{
#ifndef SYS_socketcall
  SYSCALL(accept4);
#else
  SYSCALL(socketcall);
#endif
}

// Start up LwIP.
C_CONSTRUCTOR()
{
  tcpip_init(0, 0);
}

I’ll follow a similar pattern for the rest of the system calls. The result, with all the socket functions stubbed in, is here. Another little wrinkle: at least some of the Linux ports, in this case the x86_64 port, don’t have the recv() and send() system calls. They use recvfrom() and sendto() instead.
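
The missing calls are easy to express in terms of the ones that exist: per POSIX, recv() with no address arguments is equivalent to recvfrom() with null address pointers, and likewise send() and sendto(). A small stand-alone demonstration using a socketpair:

```c
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// recv(fd, buf, len, flags) is equivalent to
// recvfrom(fd, buf, len, flags, NULL, NULL), so a port without SYS_recv
// can route its recv() handling through the recvfrom() path.
static int demo(void)
{
  int sv[2];
  if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
    return -1;
  // send() expressed as sendto() with no destination address.
  if (sendto(sv[0], "hi", 2, 0, NULL, 0) != 2)
    return -1;
  char buf[8];
  // recv() expressed as recvfrom() with no interest in the source address.
  ssize_t n = recvfrom(sv[1], buf, sizeof(buf), 0, NULL, NULL);
  close(sv[0]);
  close(sv[1]);
  return (n == 2 && memcmp(buf, "hi", 2) == 0) ? 0 : -1;
}
```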

Now I’ll start adding meat to the empty system call framework. I’ll start with the socket() system call, since it is the only way to get a socket, and several design decisions need to be made about how to interface with the rest of ELK. Normal LwIP socket descriptors index into a static array of structures containing the state of any open sockets. This means that all threads that use sockets share one socket namespace, and that this socket namespace is separate from the normal ELK file descriptor namespace. That is the first thing that has to change. To do that, I have to integrate LwIP sockets into the ELK VFS (Virtual File System) module. That is, creating a LwIP socket should result in the creation of a socket vnode, and all subsequent operations on the socket should go through the vnode. The beauty of this approach is that socket descriptors and file descriptors share the same namespace, and other operations that should be legal on sockets, like read() and write() (and select() when it gets implemented), will just work on all types of file descriptors as long as the low level support exists in the vnode implementation.
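
A sketch of what that buys (the type and function names here are made up for illustration, not ELK's actual VFS interface): once every descriptor resolves to a vnode carrying an operations table, the generic read() path neither knows nor cares whether it is talking to a file or a socket.

```c
#include <string.h>

// Hypothetical vnode dispatch; ELK's real VFS types differ.
struct vnode;
struct vnops {
  int (*read)(struct vnode *vp, char *buf, int len);
};
struct vnode {
  const struct vnops *ops;
  void *data;                  // per-vnode state: file or socket
};

static int sock_read(struct vnode *vp, char *buf, int len)
{
  // A real socket vnode would drain its receive buffer here; this
  // stand-in just copies canned data to show the dispatch.
  const char *msg = vp->data;
  int n = (int)strlen(msg);
  if (n > len)
    n = len;
  memcpy(buf, msg, n);
  return n;
}

static const struct vnops sock_ops = { sock_read };

// The generic read() path: resolve the descriptor to a vnode (elided),
// then dispatch through the operations table.
static int vfs_read(struct vnode *vp, char *buf, int len)
{
  return vp->ops->read(vp, buf, len);
}
```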

As I was delving into the implementation of the socket system call handling code, I realized that LwIP only implements the AF_INET and AF_INET6 domains. Since I’d like to support other domains, especially AF_UNIX, I decided to split the LwIP interface code out of the socket system call handling code. If I can set up the interfaces correctly, this should also allow other networking stacks to be dropped in in place of LwIP. So now I have network.c with the generic system call handling code and lwip_network.c with the LwIP interface glue.

Now’s the time to add error handling code to the system call handlers. This is really a stalling tactic while I’m thinking about how to integrate sockets into the existing virtual file system framework, but I suspect I’ll get some insights as I flesh out the generic code. After some thought and feverish coding, I came up with something that feels reasonable. I implemented the getsockopt() and setsockopt() system calls, which started giving me a better idea of what I need for socket integration. I’ve started to implement unix_network.c, which will implement the AF_UNIX (AF_LOCAL) domain. It looks like most of the code for the socket interface will be in network.c, with callbacks to the individual domain handlers where the semantics differ between domains. One of the interesting parts of the evolving design is that support for the various domain handlers can be specified at link time and eventually will be available as loadable modules.

Now I have much of the socket infrastructure done. I can create a socket file and open it, and use the socket() and socketpair() system calls. setsockopt() and getsockopt() are implemented and can be used to manipulate information in a generic socket structure. Although socket files exist in the normal virtual file system namespace, I had to add support for vnodes that don't live in that namespace for non-file sockets. All socket operations go through a vnode, however, and socket file descriptors and regular file descriptors are indistinguishable. My first little test program looks like this:

[~/ellcc/examples/socket] dev% cat main.c
/* Simple socket tests.                                                                             
 */                                                                                                 
#include <sys/socket.h>                                                                             
#include <sys/stat.h>                                                                               
#include <fcntl.h>                                                                                  
#include <unistd.h>                                                                                 
#include <stdio.h>                                                                                  
#include <stdlib.h>                                                                                 
#include <errno.h>                                                                                  
#include <string.h>                                                                                 
                                                                                                    
int main(int argc, char **argv)                                                                     
{                                                                                                   
  int sv[2];                                                                                        
  int s = socketpair(AF_UNIX, SOCK_STREAM, 0, sv);                                                  
  if (s < 0) {                                                                                      
    printf("socketpair() failed: %s\n", strerror(errno));                                           
    exit(1);                                                                                        
  }

  s = write(sv[0], "hello world\n", sizeof( "hello world\n"));
  if (s < 0) {
    printf("write() failed: %s\n", strerror(errno));
  }
  char buffer[100];
  s = read(sv[1], buffer, 1);
  if (s < 0) {
    printf("read() failed: %s\n", strerror(errno));
  }

  s = mknod("/socket", S_IFSOCK|S_IRWXU, 0);
  if (s < 0) {
    printf("mknod() failed: %s\n", strerror(errno));
  }

  int fd = open("/socket", O_RDWR);
  if (fd < 0) {
    printf("open() failed: %s\n", strerror(errno));
  }
  s = read(fd, buffer, 1);
  if (s < 0) {
    printf("read() failed: %s\n", strerror(errno));
  }
}

Here is the result of running the program:

[~/ellcc/examples/socket] dev% make run
Preprocessing elkconfig.cfg
Compiling main.c
Linking socket
Running socket
enter 'control-A x' to exit QEMU
audio: Could not init `oss' audio driver
write() failed: Protocol not supported
read() failed: Protocol not supported
read() failed: Protocol not supported

Not bad for a day's work. The errors on the read() and write() calls are expected because I haven't yet implemented the read and write buffers for AF_UNIX sockets, but the fact that the error returned is EPROTONOSUPPORT shows that the socket infrastructure is working as it's supposed to.

I've taken a little break to think about how I'd like to implement socket buffers. I think I'd like a design that

  • Allocates buffers a page (usually 4K) at a time.
  • Is coded to be shared between the different socket domains.
  • Expands and contracts as needed.
  • Is as simple as possible.

I'm thinking that one approach would be to use two arrays of page pointers, each empty initially. The arrays would reside in the socket structure and be limited in size by a kernel compile time constant, maybe 64 entries each. This would allow a maximum of 262,144 bytes for each of the buffers given 4K pages. The size would be controlled by the send and receive buffer size socket options, so sockets could be configured to use less memory in a memory constrained system.
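
The idea above can be sketched as follows. This is only a minimal sketch of the design as described, not ELK's implementation: an array of page pointers per direction, pages allocated on demand as data arrives and freed as they drain, with the byte limit standing in for the SO_SNDBUF/SO_RCVBUF style options.

```c
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096
#define MAX_PAGES 64   // compile time cap: 64 * 4K = 262,144 bytes

// Hypothetical socket buffer: an array of page pointers, initially empty.
// head and tail are byte offsets into the logical stream.
struct sockbuf {
  char *pages[MAX_PAGES];
  size_t head, tail;
  size_t limit;        // current buffer size limit (socket option)
};

static size_t sb_write(struct sockbuf *sb, const void *data, size_t len)
{
  const char *p = data;
  size_t done = 0;
  while (done < len && sb->tail - sb->head < sb->limit) {
    size_t idx = (sb->tail / PAGE_SIZE) % MAX_PAGES;
    size_t off = sb->tail % PAGE_SIZE;
    if (!sb->pages[idx])
      sb->pages[idx] = malloc(PAGE_SIZE);      // expand: page on demand
    size_t chunk = PAGE_SIZE - off;
    if (chunk > len - done)
      chunk = len - done;
    if (chunk > sb->limit - (sb->tail - sb->head))
      chunk = sb->limit - (sb->tail - sb->head);
    memcpy(sb->pages[idx] + off, p + done, chunk);
    sb->tail += chunk;
    done += chunk;
  }
  return done;
}

static size_t sb_read(struct sockbuf *sb, void *data, size_t len)
{
  char *p = data;
  size_t done = 0;
  while (done < len && sb->head < sb->tail) {
    size_t idx = (sb->head / PAGE_SIZE) % MAX_PAGES;
    size_t off = sb->head % PAGE_SIZE;
    size_t chunk = PAGE_SIZE - off;
    if (chunk > len - done)
      chunk = len - done;
    if (chunk > sb->tail - sb->head)
      chunk = sb->tail - sb->head;
    memcpy(p + done, sb->pages[idx] + off, chunk);
    sb->head += chunk;
    done += chunk;
    if (sb->head % PAGE_SIZE == 0) {           // contract: page drained
      free(sb->pages[idx]);
      sb->pages[idx] = NULL;
    }
  }
  return done;
}
```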

It's been a busy day. I've implemented socketpair() and bind() for AF_UNIX sockets. The buffering scheme seems to be working well, but I have to think a bit more about how and when the buffer size can be reduced. I'm currently implementing listen() so I can move on to connect() and accept(). The nice thing about concentrating on AF_UNIX sockets first is that it is giving me a good feel for what I'll need to implement AF_INET using the LwIP stack.

It turns out that listen() is interesting. It is supposed to set up a backlog queue of pending connections, but what should that look like? I can see what it might look like for AF_UNIX sockets, but it is unclear what it should look like for remote connections. I guess my plan of implementing AF_UNIX first needs a slight diversion: I'm going to switch back to LwIP and see what a connection queue looks like there to try to come up with a common solution.

It turns out that interfacing LwIP with the current interface is pretty easy. I've had to make only one change to the LwIP sources so far: I had to make the type of the socket member of the netconn structure definable at compile time. I needed this because sockets need to be represented by their socket structure pointer, not by a simple integer socket descriptor, since socket file descriptors share the same descriptor namespace as regular file descriptors. I changed the declaration of the socket member in lwip/api.h to

#if LWIP_SOCKET
  LWIP_SOCKET_TYPE(socket);
#endif /* LWIP_SOCKET */

and added this definition in lwip/opt.h:

#ifndef LWIP_SOCKET_TYPE
#define LWIP_SOCKET_TYPE(name)          int name
#endif

In my lwipopts.h file I overrode the definition with

#define LWIP_SOCKET_TYPE(name) struct { int name; void *priv; }

This anonymous structure replaces the previous definition of "int socket;" with both an int and a pointer. I use the pointer to keep the higher level socket pointer for lwip_interface.c.
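
A tiny stand-alone demonstration of the trick (struct netconn_like is a stand-in for LwIP's real netconn structure): because the replacement is an anonymous struct member, a C11 feature that GCC and Clang support, existing code that accesses conn->socket still compiles unchanged, while the glue layer gains conn->priv.

```c
// The overriding definition from lwipopts.h: an anonymous struct member
// so that the descriptor stays accessible by its old name while an
// extra pointer rides along for the higher level glue.
#define LWIP_SOCKET_TYPE(name) struct { int name; void *priv; }

struct netconn_like {               // stand-in for LwIP's struct netconn
  LWIP_SOCKET_TYPE(socket);
};

static int demo(void)
{
  struct netconn_like conn;
  static int higher_level_socket;   // stand-in for the ELK socket structure
  conn.socket = 5;                  // old accesses keep working
  conn.priv = &higher_level_socket; // new pointer for lwip_interface.c
  return conn.socket == 5 && conn.priv == &higher_level_socket;
}
```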

I've finished the first phase of LwIP integration. A few simple tests work, like in examples/socket/main.c. Next comes more extensive testing. I'm pretty happy with the way both AF_UNIX and AF_INET handling is implemented. It looks like it will be easy to drop in different stacks for these protocols or for other protocols as necessary.

Part 3 of this little saga will be about adding an Ethernet driver, so I can talk to something beyond 127.0.0.1.

Adding Networking Support to ELK: Part 1

The holidays are a great time. A little time away from my day job and I can concentrate a little on bigger ELLCC sub-projects. This holiday, I decided to concentrate on adding networking to ELK, ELLCC’s bare metal run time environment, previously mentioned here and here.

ELLCC is, for the most part, made up of other open source projects. ELLCC (the cross compilation tool chain) leverages the clang/LLVM compiler for cross compiling C and C++ source code. I decided early on that the ELLCC run time support libraries would all have permissive BSD or BSD-like licensing, so I use libc++ and libc++abi for C++ library support, musl for standard C library support, and compiler-rt for low level processor support.

For those of you unfamiliar with ELK (which is probably all of you), I’ll give a brief synopsis of ELK’s design. The major design goals of ELK are

  • Use the standard run time libraries, compiled for Linux, unchanged in a bare metal environment.
  • Allow fine grained configuration of ELK at link time to support target environments with widely different memory and processor resources.
  • Have BSD or BSD-like licensing so that it can be used with no encumbrances for commercial and non-commercial projects.

The implications of the first goal are interesting. I wanted a fully configured ELK environment to support, in kernel space, the POSIX environment that user space programs enjoy. In addition, all interactions between the application and the bare metal environment would be through system calls, whether or not an MMU is present or used on the system. I can feel embedded programmers shuddering at the last statement: “What!?! A system call just to write a string to a serial port? What a waste!”. I completely understand, being an old embedded guy myself. But it turns out that there are a couple of good reasons to use the system call approach. The first is that system calls are the place where a context switch will often occur in a multi-threaded environment. Other than in the most bare metal of environments, ELK supports multi-threading and can take advantage of the context saved at the time of the system call to help implement threading. The second reason for using system calls is that modern Linux user space programs try to do system calls as infrequently as possible. For example, POSIX mutexes and semaphores are built on Linux futexes. A futex is a wonderful synchronization mechanism. The only time a system call is needed when taking a mutex, for example, is when the mutex is contended. Finally, using system calls allows ELK to be implemented one system call at a time, and you only need to include the system calls that your program needs. I gave a simple example of a system call definition in this post.
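
The futex point is worth a rough illustration. In the sketch below the kernel side is stubbed out (real code issues SYS_futex and has more subtleties; see Drepper's "Futexes Are Tricky"); the thing to notice is that an uncontended lock and unlock are each a single atomic operation with no system call at all.

```c
#include <stdatomic.h>

// 0 = unlocked, 1 = locked, 2 = locked with waiters.
static atomic_int lock_word;
static int futex_syscalls;          // counts trips into the "kernel"

static void futex_syscall_stub(void)
{
  // Stand-in for futex(FUTEX_WAIT)/futex(FUTEX_WAKE); a real
  // implementation sleeps or wakes waiters in the kernel here.
  futex_syscalls++;
}

static void mutex_lock(void)
{
  int expected = 0;
  // Fast path: one atomic compare-and-swap, no system call.
  if (atomic_compare_exchange_strong(&lock_word, &expected, 1))
    return;
  // Slow path: mark the lock contended and wait in the kernel.
  while (atomic_exchange(&lock_word, 2) != 0)
    futex_syscall_stub();           // would be FUTEX_WAIT
}

static void mutex_unlock(void)
{
  // Only enter the kernel if someone recorded themselves as waiting.
  if (atomic_exchange(&lock_word, 0) == 2)
    futex_syscall_stub();           // would be FUTEX_WAKE
}
```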

At the very lowest level, ELK consists of start-up code that initializes the processor and provides hooks for context switching and system call handling. Here is an example of the ARM start-up code. Above that, ELK consists of several modules, each of which provide system calls related to their functionality. The system call status page gives a snapshot of the system calls currently implemented by ELK, along with the name of the module that the system call has (or will be) implemented in. When I started working on adding networking to ELK, the set of modules supported were

  • thread – thread related system calls and data structures.
  • memman – memory management: brk(), mmap(), etc.
  • time – time related calls: clock_gettime(), nanosleep(), etc.
  • vfs – Virtual File System: File systems, device nodes, etc.
  • vm – Virtual Memory.

Some of the modules have multiple variations that can be selected at link time. For example, memman has a very simple version (supporting malloc() only) and a full blown mmap()-supporting version, while a version of vm exists for both MMU and non-MMU systems. Much of the functionality of these modules was derived from the cool but seemingly abandoned Prex project.

Looking back at what I’ve written here so far, I’m guessing more than one of my potential readers is thinking “I thought this post was about adding networking support”. Well, I guess it is about time to get to the point.

I had several options for ELK networking. I could write a network stack myself and spend years debugging it or, like many other components of ELLCC, I could look around for a suitably licensed existing open source alternative. Unlike Prex, I wanted the networking code to show signs of being actively maintained, and I wanted to be able to import it and update it with as little change to the source code as possible. I didn’t want to get into the business of doing ongoing maintenance of a one-of-a-kind network stack. I finally settled on LwIP, which I had heard of over the years but never actually used. LwIP has the right kind of license, and even though the last release was in 2012, it is being actively maintained, as evidenced by this recent CERT advisory. In addition, LwIP was originally designed for small, resource limited systems and is highly configurable.

LwIP consists of core functionality: a single threaded network stack designed around a low level API that uses callback functions for network events. On top of that, LwIP provides two higher level APIs. The netconn API provides a multi-threaded interface by running the core functionality as its own thread and communicating with it via messages. Above that, LwIP also provides a Berkeley socket interface API. For ELK, I decided to use the core and netconn functionality and provide my own socket interface API that integrates fully into the existing ELK thread and vfs modules, so that file descriptors and vnode interfaces would be consistent.

The first step was to get LwIP to compile within the ELK build framework. That was easy: I got the latest Git clone and imported it into the ELK source tree, added the core and netconn sources to ELK’s build rules, and provided a couple of configuration headers and glue source files to tie it all together. Fortunately, LwIP has been ported to Linux, and ELK provides a Linux-like environment, so even the glue files already existed.

I was very curious how much adding networking would add to the size of an ELK program, so I built an ELK example (http://ellcc.org/viewvc/svn/ellcc/trunk/examples/elk/) both with and without networking linked in. A full blown configuration, without networking, and with this main.c:

/* ELK running as a VM enabled OS.
 */
#include <stdio.h>
#include <stdlib.h>

#include "command.h"


int main(int argc, char **argv)
{
#if 0
  // LwIP testing.
  void tcpip_init(void *, void *);
  tcpip_init(0, 0);
#endif
  setprogname("elk");
  printf("%s started. Type \"help\" for a list of commands.\n", getprogname());
  // Enter the kernel command processor.
  do_commands(getprogname());
}

It had a size like this:

[~/ellcc/examples/elk] dev% size elk
   text    data     bss     dec     hex filename
 162696    3364   64240  230300   3839c elk
[~/ellcc/examples/elk] dev%

When I enabled the LwIP initialization, I got

[~/ellcc/examples/elk] dev% size elk
   text    data     bss     dec     hex filename
 367390    4024   68428  439842   6b622 elk
[~/ellcc/examples/elk] dev%

Not bad, considering two things: first, I have just about all the LwIP bells and whistles turned on, and second, I am compiling with no optimization. Total program size is about 100K smaller with -O3.

The other cool thing is that, at least at this stage, LwIP starts with no complaints. I can run the example and see that the networking thread has been started:

[~/ellcc/examples/elk] dev% make run
Running elk
enter 'control-A x' to exit QEMU
audio: Could not init `oss' audio driver
elk started. Type "help" for a list of commands.
elk % ps
Total pages: 8192 (33554432 bytes), Free pages: 8131 (33304576 bytes)
   PID    TID       TADR STATE        PRI NAME       
     0      0 0x800683a0 RUNNING        1 kernel
     0      2 0x8006d150 IDLE           3 [idle0]
     0      3 0x80099000 SLEEPING       1 [kernel]
elk % 

That third thread (TID 3) is the network thread in all its glory. Now to make it do something.

In the next installment of this blog, I’ll describe how the ELK network module handles socket related system calls and interacts with the other ELK modules and the LwIP netconn API.

ELK: Closer to Embedded Linux Without the Linux

In a previous post I gave an update on the development status of ELK, the Embedded Little Kernel. ELK allows you to use the ELLCC tool chain to target bare metal environments. ELK is currently under development, but is available and becoming quite usable for the ARM.

ELK can be configured with a range of functionality, from a very simple “hello world” environment where you take control of everything, to a full MMU enabled virtual memory based system. In all cases, ELK uses the musl C standard library compiled for Linux, so ELK can provide a very POSIX-like environment in the bare metal world (i.e. kernel space).

An example of ELK in action can be found in the ELLCC source repository in the ELK example directory. You can configure the example to build four configurations:

  • Running from flash with no MMU.
  • Running from flash with virtual memory enabled.
  • Running from RAM with no MMU.
  • Running from RAM with MMU enabled.

The full ELK source code can be found here. Functionality currently supported by ELK:

  • Threading using pthread_create(). Many other thread synchronization functions are available, like POSIX mutexes and semaphores.
  • Virtual file system support, with a RAM, device, and fifo (pipe) file system supported currently.
  • A simple command processor for debugging and system testing.

ELK works by trapping and emulating Linux system calls. The current state of system call support is available on the system call status page.