NETGRAPH(4)		 BSD Kernel Interfaces Manual		   NETGRAPH(4)

NAME
     netgraph — graph based kernel networking subsystem

DESCRIPTION
     The netgraph system provides a uniform and modular system for the imple‐
     mentation of kernel objects which perform various networking functions.
     The objects, known as nodes, can be arranged into arbitrarily complicated
     graphs. Nodes have hooks which are used to connect two nodes together,
     forming the edges in the graph.  Nodes communicate along the edges to
     process data, implement protocols, etc.

     The aim of netgraph is to supplement rather than replace the existing
     kernel networking infrastructure.	It provides:

       ·   A flexible way of combining protocol and link level drivers
       ·   A modular way to implement new protocols
       ·   A common framework for kernel entities to inter-communicate
       ·   A reasonably fast, kernel-based implementation

Nodes and Types
     The most fundamental concept in netgraph is that of a node.  All nodes
     implement a number of predefined methods which allow them to interact
     with other nodes in a well defined manner.

     Each node has a type, which is a static property of the node determined
     at node creation time.  A node's type is described by a unique ASCII type
     name.  The type implies what the node does and how it may be connected to
     other nodes.

     In object-oriented language, types are classes and nodes are instances of
     their respective class. All node types are subclasses of the generic node
     type, and hence inherit certain common functionality and capabilities
     (e.g., the ability to have an ASCII name).

     Nodes may be assigned a globally unique ASCII name which can be used to
     refer to the node.	 The name must not contain the characters “.” or “:”
     and is limited to NG_NODESIZ characters (including NUL byte).

     Each node instance has a unique ID number which is expressed as a 32-bit
     hex value. This value may be used to refer to a node when there is no
     ASCII name assigned to it.

Hooks
     Nodes are connected to other nodes by connecting a pair of hooks, one
     from each node. Data flows bidirectionally between nodes along connected
     pairs of hooks.  A node may have as many hooks as it needs, and may
     assign whatever meaning it wants to a hook.

     Hooks have these properties:

       ·   A hook has an ASCII name which is unique among all hooks on that
	   node (other hooks on other nodes may have the same name).  The name
	   must not contain a “.” or a “:” and is limited to NG_HOOKSIZ char‐
	   acters (including NUL byte).
       ·   A hook is always connected to another hook. That is, hooks are cre‐
	   ated at the time they are connected, and breaking an edge by remov‐
	   ing either hook destroys both hooks.

     A node may decide to assign special meaning to some hooks.	 For example,
     connecting to the hook named “debug” might trigger the node to start
     sending debugging information to that hook.

Data Flow
     Two types of information flow between nodes: data messages and control
     messages. Data messages are passed in mbuf chains along the edges in the
     graph, one edge at a time. The first mbuf in a chain must have the
     M_PKTHDR flag set. Each node decides how to handle data coming in on its
     hooks.

     Control messages are type-specific C structures sent from one node
     directly to some arbitrary other node.  Control messages have a common
     header format, followed by type-specific data, and are binary structures
     for efficiency.  However, node types also may support conversion of the
     type specific data between binary and ASCII for debugging and human
     interface purposes (see the NGM_ASCII2BINARY and NGM_BINARY2ASCII generic
     control messages below).  Nodes are not required to support these conver‐
     sions.

     There are two ways to address a control message. If there is a sequence
     of edges connecting the two nodes, the message may be “source routed” by
     specifying the corresponding sequence of hooks as the destination address
     for the message (relative addressing).  Otherwise, the recipient node
     global ASCII name (or equivalent ID based name) is used as the destina‐
     tion address for the message (absolute addressing).  The two types of
     addressing may be combined, by specifying an absolute start node and a
     sequence of hooks.

     Messages often represent commands that are followed by a reply message in
     the reverse direction. To facilitate this, the recipient of a control
     message is supplied with a “return address” that is suitable for address‐
     ing a reply.

     Each control message contains a 32-bit value called a typecookie indicat‐
     ing the type of the message, i.e., how to interpret it.  Typically each
     type defines a unique typecookie for the messages that it understands.
     However, a node may choose to recognize and implement more than one type
     of message.

Netgraph is Functional
     In order to minimize latency, most netgraph operations are functional.
     That is, data and control messages are delivered by making function calls
     rather than by using queues and mailboxes.	 For example, if node A wishes
     to send a data mbuf to neighboring node B, it calls the generic netgraph
     data delivery function. This function in turn locates node B and calls
     B's “receive data” method. While this mode of operation results in good
     performance, it has a few implications for node developers:

       ·   Whenever a node delivers a data or control message, the node may
	   need to allow for the possibility of receiving a returning message
	   before the original delivery function call returns.
       ·   Netgraph nodes and support routines generally run inside critical
	   sections.  However, some nodes may want to send data and control
	   messages from a different priority level. Netgraph supplies queue‐
	   ing routines which utilize the NETISR system to move message deliv‐
	   ery inside a critical section.  Note that messages are always
	   received from inside a critical section.
       ·   It's possible for an infinite loop to occur if the graph contains
	   cycles.

     So far, these issues have not proven problematic in practice.

Interaction With Other Parts of the Kernel
     A node may have a hidden interaction with other components of the kernel
     outside of the netgraph subsystem, such as device hardware, kernel proto‐
     col stacks, etc.  In fact, one of the benefits of netgraph is the ability
     to join disparate kernel networking entities together in a consistent
     communication framework.

     An example is the node type socket which is both a netgraph node and a
     socket(2) BSD socket in the protocol family PF_NETGRAPH.  Socket nodes
     allow user processes to participate in netgraph.  Other nodes communicate
     with socket nodes using the usual methods, and the node hides the fact
     that it is also passing information to and from a cooperating user
     process.

     Another example is a device driver that presents a node interface to the
     hardware.

Node Methods
     Nodes are notified of the following actions via function calls to the
     following node methods (all from inside critical sections) and may accept
     or reject that action (by returning the appropriate error code):

     Creation of a new node
	  The constructor for the type is called. If creation of a new node is
	  allowed, the constructor must call the generic node creation func‐
	  tion (in object-oriented terms, the superclass constructor) and then
	  allocate any special resources it needs. For nodes that correspond
	  to hardware, this is typically done during the device attach rou‐
	  tine. Often a global ASCII name corresponding to the device name is
	  assigned here as well.

     Creation of a new hook
	  The hook is created and tentatively linked to the node, and the node
	  is told about the name that will be used to describe this hook. The
	  node sets up any special data structures it needs, or may reject the
	  connection, based on the name of the hook.

     Successful connection of two hooks
	  After both ends have accepted their hooks, and the links have been
	  made, the nodes get a chance to find out who their peer is across
	  the link and can then decide to reject the connection. Tear-down is
	  automatic.

     Destruction of a hook
	  The node is notified of a broken connection. The node may consider
	  some hooks to be critical to operation and others to be expendable:
	  the disconnection of one hook may be an acceptable event while for
	  another it may effect a total shutdown of the node.

     Shutdown of a node
	  This method allows a node to clean up and to ensure that any actions
	  that need to be performed at this time are taken. The method must
	  call the generic (i.e., superclass) node destructor to get rid of
	  the generic components of the node.  Some nodes (usually associated
	  with a piece of hardware) may be persistent in that a shutdown
	  breaks all edges and resets the node, but doesn't remove it, in
	  which case the generic destructor is not called.
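
     As an illustration of the constructor method described above, here is a
     minimal sketch for a hypothetical node type.  The names beginning with
     ng_example_ and the private structure are invented for this example;
     ng_make_node_common() is the generic node creation function, and
     M_NETGRAPH is the malloc type used throughout netgraph.  (Depending on
     the kernel version the allocator may be spelled malloc() or kmalloc();
     malloc() is used here to match the text of this manual page.)

	   /* Hypothetical per-node private data. */
	   struct ng_example_private {
		   node_p	node;		/* back-pointer to our node */
		   int		pkts;		/* example packet counter */
	   };

	   static int
	   ng_example_constructor(node_p *nodep)
	   {
		   struct ng_example_private *priv;
		   int error;

		   /* Call the generic (superclass) constructor first. */
		   error = ng_make_node_common(&ng_example_typestruct, nodep);
		   if (error != 0)
			   return (error);

		   /* Allocate and attach the node's private state. */
		   priv = malloc(sizeof(*priv), M_NETGRAPH, M_WAITOK | M_ZERO);
		   priv->node = *nodep;
		   (*nodep)->private = priv;
		   return (0);
	   }

     The type structure ng_example_typestruct referenced here is shown under
     INITIALIZATION below.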

Sending and Receiving Data
     Three other methods are also supported by all nodes:

     Receive data message
	  An mbuf chain is passed to the node.	The node is notified on which
	  hook the data arrived, and can use this information in its process‐
	  ing decision.	 The node must always m_freem() the mbuf chain on
	  completion or error, or pass it on to another node (or kernel mod‐
	  ule) which will then be responsible for freeing it.

	  In addition to the mbuf chain itself there is also a pointer to a
	  structure describing meta-data about the message (e.g. priority
	  information). This pointer may be NULL if there is no additional
	  information. The format for this information is described in
	  <netgraph/netgraph.h>.  The memory for meta-data must be allocated via
	  malloc() with type M_NETGRAPH.  As with the data itself, it is the
	  receiver's responsibility to free() the meta-data. If the mbuf chain
	  is freed the meta-data must be freed at the same time. If the meta-
	  data is freed but the real data is passed on, then a NULL pointer
	  must be substituted.

	  The receiving node may decide to defer the data by queueing it in
	  the netgraph NETISR system (see below).

	  The structure and use of meta-data is still experimental, but is
	  presently used in frame-relay to indicate that management packets
	  should be queued for transmission at a higher priority than data
	  packets. This is required for conformance with Frame Relay stan‐
	  dards.

     Receive queued data message
	  Usually this will be the same function as Receive data message. This
	  is the entry point called when a data message is being handed to the
	  node after having been queued in the NETISR system.  This allows a
	  node to decide in the Receive data message method that a message
	  should be deferred and queued, and be sure that when it is processed
	  from the queue, it will not be queued again.

     Receive control message
	  This method is called when a control message is addressed to the
	  node.	 A return address is always supplied, giving the address of
	  the node that originated the message so a reply message can be sent
	  anytime later.

	  It is possible for a synchronous reply to be made, and in fact this
	  is more common in practice.  This is done by setting a pointer (sup‐
	  plied as an extra function parameter) to point to the reply.	Then
	  when the control message delivery function returns, the caller can
	  check if this pointer has been made non-NULL, and if so then it
	  points to the reply message allocated via malloc() and containing
	  the synchronous response. In both directions (request and response),
	  it is up to the receiver of that message to free() the control mes‐
	  sage buffer. All control messages and replies are allocated with
	  malloc() type M_NETGRAPH.

     Much use has been made of reference counts, so that a node from which
     all references have been released is automatically freed.  This
     behaviour has been tested and debugged to present a consistent and
     trustworthy framework for the “type module” writer to use.
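
     To make the receive methods above concrete, here is a minimal sketch of
     a data handler and a control message handler for the same hypothetical
     ng_example node type used earlier.  The cookie and command constants
     (NGM_EXAMPLE_COOKIE, NGM_EXAMPLE_GET_COUNT) are invented; m_freem(),
     NG_FREE_DATA(), malloc()/free() with type M_NETGRAPH, and the message
     header fields are as described in this manual page:

	   static int
	   ng_example_rcvdata(hook_p hook, struct mbuf *m, meta_p meta)
	   {
		   struct ng_example_private *priv = hook->node->private;

		   /*
		    * Count the packet, then consume it.  A real node would
		    * usually pass the mbuf (and meta-data) on to a peer
		    * instead of freeing it here.
		    */
		   priv->pkts++;
		   NG_FREE_DATA(m, meta);  /* frees mbuf chain and meta-data */
		   return (0);
	   }

	   static int
	   ng_example_rcvmsg(node_p node, struct ng_mesg *msg,
	       const char *retaddr, struct ng_mesg **resp)
	   {
		   struct ng_example_private *priv = node->private;
		   struct ng_mesg *reply;
		   int error = 0;

		   if (msg->header.typecookie != NGM_EXAMPLE_COOKIE) {
			   error = EINVAL;	/* unknown typecookie */
		   } else if (msg->header.cmd == NGM_EXAMPLE_GET_COUNT) {
			   /* Build a synchronous reply carrying the counter. */
			   reply = malloc(sizeof(*reply) + sizeof(int),
			       M_NETGRAPH, M_NOWAIT | M_ZERO);
			   if (reply == NULL) {
				   error = ENOMEM;
			   } else {
				   reply->header = msg->header; /* keeps token */
				   reply->header.arglen = sizeof(int);
				   reply->header.flags |= NGF_RESP;
				   *(int *)reply->data = priv->pkts;
				   *resp = reply;
			   }
		   } else {
			   error = EINVAL;	/* unknown command */
		   }
		   free(msg, M_NETGRAPH);  /* the receiver frees the request */
		   return (error);
	   }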

Addressing
     The netgraph framework provides an unambiguous and simple to use method
     of specifically addressing any single node in the graph. The naming of a
     node is independent of its type, in that another node, or external compo‐
     nent need not know anything about the node's type in order to address it
     so as to send it a generic message type. Node and hook names should be
     chosen so as to make addresses meaningful.

     Addresses are either absolute or relative. An absolute address begins
     with a node name, (or ID), followed by a colon, followed by a sequence of
     hook names separated by periods. This addresses the node reached by
     starting at the named node and following the specified sequence of hooks.
     A relative address includes only the sequence of hook names, implicitly
     starting hook traversal at the local node.

     There are a couple of special possibilities for the node name.  The name
     “.” (referred to as “.:”) always refers to the local node.	 Also, nodes
     that have no global name may be addressed by their ID numbers, by enclos‐
     ing the hex representation of the ID number within square brackets.  Here
     are some examples of valid netgraph addresses:

	   .:
	   foo:
	   .:hook1
	   foo:hook1.hook2
	   [f057cd80]:hook1

     Consider the following set of nodes, which might be created for a site
     with a single physical frame relay line having two active logical DLCI
     channels, with RFC 1490 frames on DLCI 16 and PPP frames over DLCI 20:

     [type SYNC ]		   [type FRAME]			[type RFC1490]
     [ "Frame1" ](uplink)<-->(data)[<un-named>](dlci16)<-->(mux)[<un-named>  ]
     [	  A	]		   [	B     ](dlci20)<---+	[     C	     ]
							   |
							   |	  [ type PPP ]
							   +>(mux)[<un-named>]
								  [    D     ]

     One could always send a control message to node C from anywhere by using
     the name Frame1:uplink.dlci16.  Similarly, Frame1:uplink.dlci20 could
     reliably be used to reach node D, and node A could refer to node B as
     .:uplink, or simply uplink.  Conversely, B can refer to A as data.	 The
     address mux.data could be used by both nodes C and D to address a message
     to node A.

     Note that this is only for control messages.  Data messages are routed
     one hop at a time, by specifying the departing hook, with each node mak‐
     ing the next routing decision. So when B receives a frame on hook data it
     decodes the frame relay header to determine the DLCI, and then forwards
     the unwrapped frame to either C or D.

     A similar graph might be used to represent multi-link PPP running over an
     ISDN line:

     [ type BRI ](B1)<--->(link1)[ type MPP  ]
     [	"ISDN1" ](B2)<--->(link2)[ (no name) ]
     [		](D) <-+
		       |
      +----------------+
      |
      +->(switch)[ type Q.921 ](term1)<---->(datalink)[ type Q.931 ]
		 [ (no name)  ]			      [ (no name)  ]

Netgraph Structures
     Interesting members of the node and hook structures are shown below:

     struct  ng_node {
       char    *name;		     /* Optional globally unique name */
       void    *private;	     /* Node implementation private info */
       struct  ng_type *type;	     /* The type of this node */
       int     refs;		     /* Number of references to this struct */
       int     numhooks;	     /* Number of connected hooks */
       hook_p  hooks;		     /* Linked list of (connected) hooks */
     };
     typedef struct ng_node *node_p;

     struct  ng_hook {
       char	      *name;	     /* This node's name for this hook */
       void	      *private;	     /* Node implementation private info */
       int	      refs;	     /* Number of references to this struct */
       struct ng_node *node;	     /* The node this hook is attached to */
       struct ng_hook *peer;	     /* The other hook in this connected pair */
       struct ng_hook *next;	     /* Next in list of hooks for this node */
     };
     typedef struct ng_hook *hook_p;

     The maintenance of the name pointers, reference counts, and linked list
     of hooks for each node is handled automatically by the netgraph subsys‐
     tem.  Typically a node's private info contains a back-pointer to the node
     or hook structure, which counts as a new reference that must be regis‐
     tered by incrementing node->refs.

     From a hook you can obtain the corresponding node, and from a node the
     list of all active hooks.
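
     For example, using only the members shown above, a node implementation
     can go from any of its hooks to the node itself and then walk the list
     of connected hooks (a sketch; some_hook stands for any hook_p the node
     already holds):

	   hook_p hp;
	   node_p node = some_hook->node;	/* from a hook to its node */

	   for (hp = node->hooks; hp != NULL; hp = hp->next)
		   printf("hook %s is connected to peer hook %s\n",
		       hp->name, hp->peer->name);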

     Node types are described by these structures:

     /** How to convert a control message from binary <-> ASCII */
     struct ng_cmdlist {
       u_int32_t		  cookie;     /* typecookie */
       int			  cmd;	      /* command number */
       const char		  *name;      /* command name */
       const struct ng_parse_type *mesgType;  /* args if !NGF_RESP */
       const struct ng_parse_type *respType;  /* args if NGF_RESP */
     };

     struct ng_type {
       u_int32_t version;		     /* Must equal NG_VERSION */
       const  char *name;		     /* Unique type name */

       /* Module event handler */
       modeventhand_t  mod_event;	     /* Handle load/unload (optional) */

       /* Constructor */
       int    (*constructor)(node_p *node);  /* Create a new node */

       /** Methods using the node **/
       int    (*rcvmsg)(node_p node,	     /* Receive control message */
		 struct ng_mesg *msg,		     /* The message */
		 const char *retaddr,		     /* Return address */
		 struct ng_mesg **resp);	     /* Synchronous response */
       int    (*shutdown)(node_p node);	     /* Shutdown this node */
       int    (*newhook)(node_p node,	     /* create a new hook */
		 hook_p hook,			     /* Pre-allocated struct */
		 const char *name);		     /* Name for new hook */

       /** Methods using the hook **/
       int    (*connect)(hook_p hook);	     /* Confirm new hook attachment */
       int    (*rcvdata)(hook_p hook,	     /* Receive data on a hook */
		 struct mbuf *m,		     /* The data in an mbuf */
		 meta_p meta);			     /* Meta-data, if any */
       int    (*disconnect)(hook_p hook);    /* Notify disconnection of hook */

       /** How to convert control messages binary <-> ASCII */
       const struct ng_cmdlist *cmdlist;     /* Optional; may be NULL */
     };

     Control messages have the following structure:

     #define NG_CMDSTRSIZ    16	     /* Max command string (including null) */

     struct ng_mesg {
       struct ng_msghdr {
	 u_char	     version;	     /* Must equal NG_VERSION */
	 u_char	     spare;	     /* Pad to 2 bytes */
	 u_short     arglen;	     /* Length of cmd/resp data */
	 u_long	     flags;	     /* Message status flags */
	 u_long	     token;	     /* Reply should have the same token */
	 u_long	     typecookie;     /* Node type understanding this message */
	 u_long	     cmd;	     /* Command identifier */
	 u_char	     cmdstr[NG_CMDSTRSIZ]; /* Cmd string (for debug) */
       } header;
       char  data[0];		     /* Start of cmd/resp data */
     };

     #define NG_VERSION	     1		     /* Netgraph version */
     #define NGF_ORIG	     0x0000	     /* Command */
     #define NGF_RESP	     0x0001	     /* Response */

     Control messages have the fixed header shown above, followed by a vari‐
     able length data section which depends on the type cookie and the com‐
     mand. Each field is explained below:

     version
	  Indicates the version of netgraph itself. The current version is
	  NG_VERSION.

     arglen
	  This is the length of any extra arguments, which begin at data.

     flags
	  Indicates whether this is a command or a response control message.

     token
	  The token is a means by which a sender can match a reply message to
	  the corresponding command message; the reply always has the same
	  token.

     typecookie
	  The corresponding node type's unique 32-bit value.  If a node
	  doesn't recognize the type cookie it must reject the message by
	  returning EINVAL.

	  Each type should have an include file that defines the commands,
	  argument format, and cookie for its own messages.  The typecookie
	  ensures that the same header file was included by both sender and
	  receiver; when an incompatible change in the header file is made,
	  the typecookie must be changed.  The de facto method for generating
	  unique type cookies is to take the seconds from the epoch at the
	  time the header file is written (i.e., the output of date -u +'%s').

	  There is a predefined typecookie NGM_GENERIC_COOKIE for the
	  “generic” node type, and a corresponding set of generic messages
	  which all nodes understand.  The handling of these messages is auto‐
	  matic.

     command
	  The identifier for the message command. This is type specific, and
	  is defined in the same header file as the typecookie.

     cmdstr
	  Room for a short human readable version of “command” (for debugging
	  purposes only).

     Some modules may choose to implement messages from more than one of the
     header files and thus recognize more than one type cookie.
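
     As a sketch of these conventions, the header file for the hypothetical
     ng_example node type used in the earlier examples might contain the
     following (the type name, cookie value, and commands are invented for
     illustration only):

	   /* ng_example.h -- hypothetical example node type */
	   #define NG_EXAMPLE_NODE_TYPE	"example"
	   #define NGM_EXAMPLE_COOKIE	1220313600	/* date -u +'%s' */

	   /* Commands understood by this node type. */
	   enum {
		   NGM_EXAMPLE_GET_COUNT = 1,	/* return the packet counter */
		   NGM_EXAMPLE_RESET		/* clear the packet counter */
	   };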

Control Message ASCII Form
     Control messages are in binary format for efficiency.  However, for
     debugging and human interface purposes, and if the node type supports it,
     control messages may be converted to and from an equivalent ASCII form.
     The ASCII form is similar to the binary form, with two exceptions:

     o	  The cmdstr header field must contain the ASCII name of the command,
	  corresponding to the cmd header field.
     o	  The args field contains a NUL-terminated ASCII string version of the
	  message arguments.

     In general, the arguments field of a control message can be any arbitrary
     C data type.  Netgraph includes parsing routines to support some pre-
     defined datatypes in ASCII with this simple syntax:

     o	  Integer types are represented by base 8, 10, or 16 numbers.
     o	  Strings are enclosed in double quotes and respect the normal C lan‐
	  guage backslash escapes.
     o	  IP addresses have the obvious form.
     o	  Arrays are enclosed in square brackets, with the elements listed
	  consecutively starting at index zero.	 An element may have an
	  optional index and equals sign preceding it.	Whenever an element
	  does not have an explicit index, the index is implicitly the previ‐
	  ous element's index plus one.
     o	  Structures are enclosed in curly braces, and each field is specified
	  in the form “fieldname=value”.
     o	  Any array element or structure field whose value is equal to its
	  “default value” may be omitted. For integer types, the default value
	  is usually zero; for string types, the empty string.
     o	  Array elements and structure fields may be specified in any order.

     Each node type may define its own arbitrary types by providing the neces‐
     sary routines to parse and unparse.  ASCII forms defined for a specific
     node type are documented in the documentation for that node type.
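
     For example, the arguments of a hypothetical message whose binary form
     is a structure containing an integer, a string, and an array of IP
     addresses might look like this in ASCII form:

	   { dlci=16 name="uplink" peers=[ 192.168.1.1 10.0.0.2 ] }

     Any field equal to its default value (for instance an integer field
     whose value is zero) could simply have been omitted.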

Generic Control Messages
     There are a number of standard predefined messages that will work for any
     node, as they are supported directly by the framework itself.  These are
     defined in <netgraph/ng_message.h> along with the basic layout of mes‐
     sages and other similar information.

     NGM_CONNECT
	  Connect to another node, using the supplied hook names on either
	  end.

     NGM_MKPEER
	  Construct a node of the given type and then connect to it using the
	  supplied hook names.

     NGM_SHUTDOWN
	  The target node should disconnect from all its neighbours and shut
	  down.	 Persistent nodes such as those representing physical hardware
	  might not disappear from the node namespace, but only reset them‐
	  selves.  The node must disconnect all of its hooks.  This may result
	  in neighbors shutting themselves down, and possibly a cascading
	  shutdown of the entire connected graph.

     NGM_NAME
	  Assign a name to a node. Nodes can exist without having a name, and
	  this is the default for nodes created using the NGM_MKPEER method.
	  Such nodes can only be addressed relatively or by their ID number.

     NGM_RMHOOK
	  Ask the node to break a hook connection to one of its neighbours.
	  Both nodes will have their “disconnect” method invoked.  Either node
	  may elect to totally shut down as a result.

     NGM_NODEINFO
	  Asks the target node to describe itself. The four returned fields
	  are the node name (if named), the node type, the node ID and the
	  number of hooks attached. The ID is an internal number unique to
	  that node.

     NGM_LISTHOOKS
	  This returns the information given by NGM_NODEINFO, but in addition
	  includes an array of fields describing each link, and the descrip‐
	  tion for the node at the far end of that link.

     NGM_LISTNAMES
	  This returns an array of node descriptions (as for NGM_NODEINFO)
	  where each entry of the array describes a named node.	 All named
	  nodes will be described.

     NGM_LISTNODES
	  This is the same as NGM_LISTNAMES except that all nodes are listed
	  regardless of whether they have a name or not.

     NGM_LISTTYPES
	  This returns a list of all currently installed netgraph types.

     NGM_TEXT_STATUS
	  The node may return a text formatted status message.	The status
	  information is determined entirely by the node type.	It is the only
	  "generic" message that requires any support within the node itself
	  and as such the node may elect to not support this message. The text
	  response must be less than NG_TEXTRESPONSE bytes in length
	  (presently 1024). This can be used to return general status informa‐
	  tion in human readable form.

     NGM_BINARY2ASCII
	  This message converts a binary control message to its ASCII form.
	  The entire control message to be converted is contained within the
	  arguments field of the NGM_BINARY2ASCII message itself.  If success‐
	  ful, the reply will contain the same control message in ASCII form.
	  A node will typically only know how to translate messages that it
	  itself understands, so the target node of the NGM_BINARY2ASCII is
	  often the same node that would actually receive that message.

     NGM_ASCII2BINARY
	  The opposite of NGM_BINARY2ASCII.  The entire control message to be
	  converted, in ASCII form, is contained in the arguments section of
	  the NGM_ASCII2BINARY and need only have the flags, cmdstr, and
	  arglen header fields filled in, plus the NUL-terminated string ver‐
	  sion of the arguments in the arguments field.	 If successful, the
	  reply contains the binary version of the control message.

Metadata
     Data moving through the netgraph system can be accompanied by meta-data
     that describes some aspect of that data. The form of the meta-data is a
     fixed header, which contains enough information for most uses, and can
     optionally be supplemented by trailing option structures, which contain a
     cookie (see the section on control messages), an identifier, a length and
     optional data. If a node does not recognize the cookie associated with an
     option, it should ignore that option.

     Meta-data might include such things as priority, discard eligibility, or
     special processing requirements. It might also mark a packet for debug
     status, etc. The use of meta-data is still experimental.

INITIALIZATION
     The base netgraph code may either be statically compiled into the kernel
     or else loaded dynamically as a KLD via kldload(8).  In the former case,
     include

	   options NETGRAPH

     in your kernel configuration file. You may also include selected node
     types in the kernel compilation, for example:

	   options NETGRAPH
	   options NETGRAPH_SOCKET
	   options NETGRAPH_ECHO

     Once the netgraph subsystem is loaded, individual node types may be
     loaded at any time as KLD modules via kldload(8).	Moreover, netgraph
     knows how to automatically do this; when a request to create a new node
     of unknown type type is made, netgraph will attempt to load the KLD mod‐
     ule ng_type.ko.

     Types can also be installed at boot time, as certain device drivers may
     want to export each instance of the device as a netgraph node.

     In general, new types can be installed at any time from within the kernel
     by calling ng_newtype(), supplying a pointer to the type's struct ng_type
     structure.

     The NETGRAPH_INIT() macro automates this process by using a linker set.
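
     As a sketch, registration of the hypothetical ng_example type from the
     earlier examples could look like the following.  Designated initializers
     are used so that only the relevant members of struct ng_type need to be
     named; the exact arguments of NETGRAPH_INIT() should be checked against
     <netgraph/netgraph.h> for the version in use:

	   static struct ng_type ng_example_typestruct = {
		   .version	= NG_VERSION,
		   .name	= NG_EXAMPLE_NODE_TYPE,
		   .constructor	= ng_example_constructor,
		   .rcvmsg	= ng_example_rcvmsg,
		   .shutdown	= ng_example_shutdown,
		   .newhook	= ng_example_newhook,
		   .connect	= ng_example_connect,
		   .rcvdata	= ng_example_rcvdata,
		   .disconnect	= ng_example_disconnect,
		   /* .cmdlist may point to an ng_cmdlist[] for ASCII support */
	   };
	   NETGRAPH_INIT(example, &ng_example_typestruct);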

EXISTING NODE TYPES
     Several node types currently exist. Each is fully documented in its own
     man page:

     SOCKET
	  The socket type implements two new sockets in the new protocol
	  domain PF_NETGRAPH.  The new socket protocols are NG_DATA and
	  NG_CONTROL, both of type SOCK_DGRAM.	Typically one of each is asso‐
	  ciated with a socket node.  When both sockets have closed, the node
	  will shut down. The NG_DATA socket is used for sending and receiving
	  data, while the NG_CONTROL socket is used for sending and receiving
	  control messages.  Data and control messages are passed using the
	  sendto(2) and recvfrom(2) calls, using a struct sockaddr_ng socket
	  address.

     HOLE
	  Responds only to generic messages and is a “black hole” for data.
	  Useful for testing.  Always accepts new hooks.

     ECHO
	  Responds only to generic messages and always echoes data back
	  through the hook from which it arrived.  Returns any non-generic mes‐
	  sages as their own response.	Useful for testing.  Always accepts new
	  hooks.

     TEE  This node is useful for “snooping”.  It has 4 hooks: left, right,
	  left2right, and right2left.  Data entering from the right is passed
	  to the left and duplicated on right2left, and data entering from the
	  left is passed to the right and duplicated on left2right.  Data
	  entering from left2right is sent to the right and data from
	  right2left to left.

     RFC1490 MUX
	  Encapsulates/de-encapsulates frames encoded according to RFC 1490.
	  Has a hook for the encapsulated packets (“downstream”) and one hook
	  for each protocol (i.e., IP, PPP, etc.).

     FRAME RELAY MUX
	  Encapsulates/de-encapsulates Frame Relay frames.  Has a hook for the
	  encapsulated packets (“downstream”) and one hook for each DLCI.

     FRAME RELAY LMI
	  Automatically handles frame relay “LMI” (link management interface)
	  operations and packets.  Automatically probes and detects which of
	  several LMI standards is in use at the exchange.

     TTY  This node is also a line discipline. It simply converts between mbuf
	  frames and sequential serial data, allowing a tty to appear as a
	  netgraph node. It has a programmable “hotkey” character.

     ASYNC
	  This node encapsulates and de-encapsulates asynchronous frames
	  according to RFC 1662. This is used in conjunction with the TTY node
	  type for supporting PPP links over asynchronous serial lines.

     INTERFACE
	  This node is also a system networking interface. It has hooks repre‐
	  senting each protocol family (IP, AppleTalk, IPX, etc.) and appears
	  in the output of ifconfig(8).	 The interfaces are named ng0, ng1,
	  etc.

NOTES
     Whether a named node exists can be checked by trying to send a control
     message to it (e.g., NGM_NODEINFO).  If it does not exist, ENOENT will be
     returned.

     All data messages are mbuf chains with the M_PKTHDR flag set.

     Nodes are responsible for freeing what they allocate.  There are three
     exceptions:

     1	   Mbufs sent across a data link are never to be freed by the sender.

     2	   Any meta-data information traveling with the data has the same
	   restriction.	 It might be freed by any node the data passes
	   through, and a NULL passed onwards, but the caller will never free
	   it.	Two macros NG_FREE_META(meta) and NG_FREE_DATA(m, meta) should
	   be used if possible to free data and meta data (see
	   <netgraph/netgraph.h>).

     3	   Messages sent using ng_send_msg() are freed by the callee. As in
	   the case above, the addresses associated with the message are freed
	   by whatever allocated them so the recipient should copy them if it
	   wants to keep that information.

FILES
     <netgraph/netgraph.h>
	    Definitions for use solely within the kernel by netgraph nodes.
     <netgraph/ng_message.h>
	    Definitions needed by any file that needs to deal with netgraph
	    messages.
     <netgraph/socket/ng_socket.h>
	    Definitions needed to use netgraph socket type nodes.
     <netgraph/{type}/ng_{type}.h>
	    Definitions needed to use netgraph {type} nodes, including the
	    type cookie definition.
     /boot/modules/netgraph.ko
	    Netgraph subsystem loadable KLD module.
     /boot/modules/ng_{type}.ko
	    Loadable KLD module for node type {type}.

USER MODE SUPPORT
     There is a library for supporting user-mode programs that wish to inter‐
     act with the netgraph system. See netgraph(3) for details.

     Two user-mode support programs, ngctl(8) and nghook(8), are available to
     assist manual configuration and debugging.

     There are a few useful techniques for debugging new node types.  First,
     implementing new node types in user-mode first makes debugging easier.
     The tee node type is also useful for debugging, especially in conjunction
     with ngctl(8) and nghook(8).
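
     As an illustration, the following user-mode sketch uses the netgraph(3)
     library to create a socket node, attach an ng_echo(4) node to it with
     NGM_MKPEER, and send the new node a generic NGM_NODEINFO request.  The
     node and hook names are arbitrary, error handling is abbreviated, and
     the program must be linked with -lnetgraph; see netgraph(3) for the
     exact library interface:

	   #include <sys/types.h>
	   #include <err.h>
	   #include <string.h>
	   #include <netgraph.h>
	   #include <netgraph/ng_message.h>

	   int
	   main(void)
	   {
		   struct ngm_mkpeer mkp;
		   int cs, ds;

		   /* Create a socket node plus its control and data sockets. */
		   if (NgMkSockNode("demo", &cs, &ds) < 0)
			   err(1, "NgMkSockNode");

		   /* Ask the socket node to create and attach an echo node. */
		   memset(&mkp, 0, sizeof(mkp));
		   strlcpy(mkp.type, "echo", sizeof(mkp.type));
		   strlcpy(mkp.ourhook, "hook1", sizeof(mkp.ourhook));
		   strlcpy(mkp.peerhook, "hook1", sizeof(mkp.peerhook));
		   if (NgSendMsg(cs, ".:", NGM_GENERIC_COOKIE, NGM_MKPEER,
		       &mkp, sizeof(mkp)) < 0)
			   err(1, "NgSendMsg(NGM_MKPEER)");

		   /* The echo node is now reachable as ".:hook1". */
		   if (NgSendMsg(cs, ".:hook1", NGM_GENERIC_COOKIE,
		       NGM_NODEINFO, NULL, 0) < 0)
			   err(1, "NgSendMsg(NGM_NODEINFO)");

		   /* The reply could then be read back with NgRecvMsg(). */
		   return (0);
	   }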

SEE ALSO
     socket(2), netgraph(3), ng_async(4), ng_bpf(4), ng_bridge(4),
     ng_cisco(4), ng_echo(4), ng_eiface(4), ng_etf(4), ng_ether(4),
     ng_frame_relay(4), ng_hole(4), ng_iface(4), ng_ksocket(4), ng_l2tp(4),
     ng_lmi(4), ng_mppc(4), ng_one2many(4), ng_ppp(4), ng_pppoe(4),
     ng_rfc1490(4), ng_socket(4), ng_tee(4), ng_tty(4), ng_UI(4), ng_vjc(4),
     ngctl(8), nghook(8)

HISTORY
     The netgraph system was designed and first implemented at Whistle Commu‐
     nications, Inc. in a version of FreeBSD 2.2 customized for the Whistle
     InterJet.	It first made its debut in the main tree in FreeBSD 3.4.

AUTHORS
     Julian Elischer ⟨julian@FreeBSD.org⟩, with contributions by Archie Cobbs
     ⟨archie@FreeBSD.org⟩.

BSD			       September 2, 2008			   BSD