AUDIO(4)		 BSD Kernel Interfaces Manual		      AUDIO(4)

NAME
     audio — device-independent audio driver layer

SYNOPSIS
     #include <sys/audioio.h>

DESCRIPTION
     The audio driver provides support for various audio peripherals.  It pro‐
     vides a uniform programming interface layer above different underlying
     audio hardware drivers.  The audio layer provides full-duplex operation
     if the underlying hardware configuration supports it.

     There are four device files available for audio operation: /dev/audio,
     /dev/sound, /dev/audioctl, and /dev/mixer.

     /dev/audio and /dev/sound are used for recording or playback of digital
     samples.

     /dev/mixer is used to manipulate volume, recording source, or other audio
     mixer functions.

     /dev/audioctl accepts the same ioctl(2) operations as /dev/sound, but no
     other operations.

     In contrast to /dev/sound, which has the exclusive open property,
     /dev/audioctl can be opened at any time and can be used to manipulate the
     audio device while it is in use.

SAMPLING DEVICES
     When /dev/audio is opened, it automatically directs the underlying driver
     to manipulate monaural 8-bit mu-law samples.  In addition, if it is
     opened read-only (write-only) the device is set to half-duplex record
     (play) mode with recording (playing) unpaused and playing (recording)
     paused.  When /dev/sound is opened, it maintains the previous audio sam‐
     ple mode and record/playback mode.	 In all other respects /dev/audio and
     /dev/sound are identical.
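
     For example, a playback-only client might open the device as in the
     following minimal sketch (error handling for the subsequent writes is
     omitted):

           /*
            * Sketch: open the sampling device for playback only.  A
            * write-only open selects half-duplex play mode with recording
            * paused; /dev/sound also keeps the previously set encoding,
            * whereas /dev/audio would reset it to monaural 8-bit mu-law.
            */
           #include <fcntl.h>
           #include <unistd.h>
           #include <err.h>

           int
           main(void)
           {
                   int fd;

                   fd = open("/dev/sound", O_WRONLY);
                   if (fd == -1)
                           err(1, "open /dev/sound");
                   /* ... write(2) sample data here ... */
                   close(fd);
                   return 0;
           }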

     Only one process may hold open a sampling device at a given time
     (although file descriptors may be shared between processes once the first
     open completes).

     On a half-duplex device, writes while recording is in progress will be
     immediately discarded.  Similarly, reads while playback is in progress
     will be filled with silence but delayed to return at the current sampling
     rate.  If both playback and recording are requested on a half-duplex
     device, playback mode takes precedence and recordings will get silence.

     On a full-duplex device, reads and writes may operate concurrently with‐
     out interference.	If a full-duplex capable audio device is opened for
     both reading and writing it will start in half-duplex play mode; full-
     duplex mode has to be set explicitly.

     On either type of device, if the playback mode is paused then silence is
     played instead of the provided samples, and if recording is paused then
     the process blocks in read(2) until recording is unpaused.

     If a writing process does not call write(2) frequently enough to provide
     samples at the pace the hardware consumes them silence is inserted.  If
     the AUMODE_PLAY_ALL mode is not set the writing process must provide
     enough data via subsequent write calls to “catch up” in time to the cur‐
     rent audio block before any more process-provided samples will be played.
     If a reading process does not call read(2) frequently enough, it will
     simply miss samples.

     The audio device is normally accessed with read(2) or write(2) calls, but
     it can also be mapped into user memory with mmap(2) (when supported by
     the device).  Once the device has been mapped it can no longer be
     accessed by read or write; all access is by reading and writing to the
     mapped memory.  The device appears as a block of memory of size
     buffersize (as available via AUDIO_GETINFO or AUDIO_GETBUFINFO).  The
     device driver will continuously move data from this buffer from/to the
     audio hardware, wrapping around at the end of the buffer.	To find out
     where the hardware is currently accessing data in the buffer the
     AUDIO_GETIOFFS and AUDIO_GETOOFFS calls can be used.  The playing and
     recording buffers are distinct and must be mapped separately if both are
     to be used.  Only encodings that are not emulated (i.e. where
     AUDIO_ENCODINGFLAG_EMULATED is not set) work properly for a mapped
     device.
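
     A minimal sketch of mapping the play buffer; the buffer size is obtained
     with AUDIO_GETINFO as described above:

           /*
            * Sketch: map the play buffer into user memory.  The size is
            * taken from the play.buffer_size field returned by
            * AUDIO_GETINFO; the driver continuously transfers data from
            * the mapped region to the hardware, wrapping at its end.
            */
           #include <sys/ioctl.h>
           #include <sys/mman.h>
           #include <sys/audioio.h>
           #include <fcntl.h>
           #include <string.h>
           #include <unistd.h>
           #include <err.h>

           int
           main(void)
           {
                   audio_info_t info;
                   void *buf;
                   int fd;

                   fd = open("/dev/sound", O_WRONLY);
                   if (fd == -1)
                           err(1, "open");
                   if (ioctl(fd, AUDIO_GETINFO, &info) == -1)
                           err(1, "AUDIO_GETINFO");
                   buf = mmap(NULL, info.play.buffer_size, PROT_WRITE,
                       MAP_SHARED, fd, 0);
                   if (buf == MAP_FAILED)
                           err(1, "mmap");
                   /* Placeholder contents; a real application keeps the
                    * buffer filled ahead of the position reported by
                    * AUDIO_GETOOFFS. */
                   memset(buf, 0, info.play.buffer_size);
                   munmap(buf, info.play.buffer_size);
                   close(fd);
                   return 0;
           }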

     The audio device, like most devices, can be used in select, can be set in
     non-blocking mode and can be set (with a FIOASYNC ioctl) to send a SIGIO
     when I/O is possible.  The mixer device can be set to generate a SIGIO
     whenever a mixer value is changed.
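
     A sketch of arranging for SIGIO delivery; designating the recipient
     process with fcntl(2) F_SETOWN is the conventional companion step and is
     assumed here:

           /*
            * Sketch: request SIGIO when I/O is possible on the audio
            * device.  The recipient process is designated with F_SETOWN
            * and asynchronous notification is then enabled with FIOASYNC.
            */
           #include <sys/ioctl.h>
           #include <fcntl.h>
           #include <signal.h>
           #include <unistd.h>
           #include <err.h>

           static void
           on_sigio(int sig)
           {
                   /* read(2)/write(2) on the device will not block now. */
           }

           int
           main(void)
           {
                   int fd, on = 1;

                   signal(SIGIO, on_sigio);
                   fd = open("/dev/sound", O_RDWR);
                   if (fd == -1)
                           err(1, "open");
                   if (fcntl(fd, F_SETOWN, getpid()) == -1)
                           err(1, "F_SETOWN");
                   if (ioctl(fd, FIOASYNC, &on) == -1)
                           err(1, "FIOASYNC");
                   /* ... perform I/O from the signal-driven main loop ... */
                   return 0;
           }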

     The following ioctl(2) commands are supported on the sample devices:

     AUDIO_FLUSH
	     This command stops all playback and recording, clears all queued
	     buffers, resets error counters, and restarts recording and play‐
	     back as appropriate for the current sampling mode.

     AUDIO_RERROR (int)
	     This command fetches the count of dropped input samples into its
	     integer argument.	There is no information regarding when in the
	     sample stream they were dropped.

     AUDIO_WSEEK (u_long)
	     This command fetches the count of samples that are queued ahead
	     of the first sample in the most recent sample block written into
	     its integer argument.

     AUDIO_DRAIN
	     This command suspends the calling process until all queued play‐
	     back samples have been played by the hardware.
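
	     For example, a short player that must not close the device
	     before its final samples are heard might end like this sketch
	     (placeholder data, no error recovery):

	     /*
	      * Sketch: write one block of samples and wait for it to be
	      * played before closing the device.
	      */
	     #include <sys/ioctl.h>
	     #include <sys/audioio.h>
	     #include <fcntl.h>
	     #include <string.h>
	     #include <unistd.h>
	     #include <err.h>

	     int
	     main(void)
	     {
	             unsigned char buf[8000];
	             int fd;

	             fd = open("/dev/audio", O_WRONLY); /* 8-bit mono mu-law */
	             if (fd == -1)
	                     err(1, "open");
	             /* Placeholder; real code fills buf with encoded samples. */
	             memset(buf, 0, sizeof(buf));
	             if (write(fd, buf, sizeof(buf)) == -1)
	                     err(1, "write");
	             if (ioctl(fd, AUDIO_DRAIN, NULL) == -1)
	                     err(1, "AUDIO_DRAIN");
	             close(fd);
	             return 0;
	     }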

     AUDIO_GETDEV (audio_device_t)
	     This command fetches the current hardware device information into
	     the audio_device_t argument.

	     typedef struct audio_device {
		     char name[MAX_AUDIO_DEV_LEN];
		     char version[MAX_AUDIO_DEV_LEN];
		     char config[MAX_AUDIO_DEV_LEN];
	     } audio_device_t;
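
	     For example, the identification strings can be printed via
	     /dev/audioctl, which may be opened even while another process
	     uses the sampling device:

	     /* Sketch: print the underlying hardware identification. */
	     #include <sys/ioctl.h>
	     #include <sys/audioio.h>
	     #include <fcntl.h>
	     #include <stdio.h>
	     #include <err.h>

	     int
	     main(void)
	     {
	             audio_device_t dev;
	             int fd;

	             fd = open("/dev/audioctl", O_RDONLY);
	             if (fd == -1)
	                     err(1, "open");
	             if (ioctl(fd, AUDIO_GETDEV, &dev) == -1)
	                     err(1, "AUDIO_GETDEV");
	             printf("%s %s %s\n", dev.name, dev.version, dev.config);
	             return 0;
	     }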

     AUDIO_GETFD (int)
	     This command returns the current setting of the full-duplex mode.

     AUDIO_GETENC (audio_encoding_t)
	     This command is used iteratively to fetch sample encoding names
	     and format_ids into the input/output audio_encoding_t argument.

	     typedef struct audio_encoding {
		     int index;	     /* input: nth encoding */
		     char name[MAX_AUDIO_DEV_LEN]; /* name of encoding */
		     int encoding;   /* value for encoding parameter */
		     int precision;  /* value for precision parameter */
		     int flags;
	     #define AUDIO_ENCODINGFLAG_EMULATED 1 /* software emulation mode */
	     } audio_encoding_t;

	     To query all the supported encodings, start with an index field
	     of 0 and continue with successive encodings (1, 2, ...) until the
	     command returns an error.
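
	     A sketch of such a loop:

	     /* Sketch: list all supported encodings. */
	     #include <sys/ioctl.h>
	     #include <sys/audioio.h>
	     #include <fcntl.h>
	     #include <stdio.h>
	     #include <string.h>
	     #include <err.h>

	     int
	     main(void)
	     {
	             audio_encoding_t enc;
	             int fd;

	             fd = open("/dev/audioctl", O_RDONLY);
	             if (fd == -1)
	                     err(1, "open");
	             memset(&enc, 0, sizeof(enc));
	             for (enc.index = 0;
	                 ioctl(fd, AUDIO_GETENC, &enc) == 0; enc.index++)
	                     printf("%d: %s, %d bits%s\n", enc.index,
	                         enc.name, enc.precision,
	                         (enc.flags & AUDIO_ENCODINGFLAG_EMULATED) ?
	                         " (emulated)" : "");
	             return 0;
	     }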

     AUDIO_SETFD (int)
	     This command sets the device into full-duplex operation if its
	     integer argument has a non-zero value, or into half-duplex opera‐
	     tion if it contains a zero value.	If the device does not support
	     full-duplex operation, attempting to set full-duplex mode returns
	     an error.

     AUDIO_GETPROPS (int)
	     This command gets a bit set of hardware properties.  If the hard‐
	     ware has a certain property the corresponding bit is set, other‐
	     wise it is not.  The properties can have the following values:

	     AUDIO_PROP_FULLDUPLEX   the device admits full duplex operation.
	     AUDIO_PROP_MMAP	     the device can be used with mmap(2).
	     AUDIO_PROP_INDEPENDENT  the device can set the playing and
				     recording encoding parameters indepen‐
				     dently.
	     AUDIO_PROP_PLAYBACK     the device is capable of audio playback.
	     AUDIO_PROP_CAPTURE	     the device is capable of audio capture.
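
	     For example, a program might check for full-duplex and mmap(2)
	     capability with a sketch like:

	     /* Sketch: test hardware properties before use. */
	     #include <sys/ioctl.h>
	     #include <sys/audioio.h>
	     #include <fcntl.h>
	     #include <stdio.h>
	     #include <err.h>

	     int
	     main(void)
	     {
	             int fd, props;

	             fd = open("/dev/audioctl", O_RDONLY);
	             if (fd == -1)
	                     err(1, "open");
	             if (ioctl(fd, AUDIO_GETPROPS, &props) == -1)
	                     err(1, "AUDIO_GETPROPS");
	             printf("full duplex: %s, mmap: %s\n",
	                 (props & AUDIO_PROP_FULLDUPLEX) ? "yes" : "no",
	                 (props & AUDIO_PROP_MMAP) ? "yes" : "no");
	             return 0;
	     }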

     AUDIO_GETIOFFS (audio_offset_t)

     AUDIO_GETOOFFS (audio_offset_t)
	     This command fetches the current offset in the input (output)
	     buffer where the audio hardware's DMA engine will be putting
	     (getting) data.  It is mostly useful when the device buffer is
	     available in user space via the mmap(2) call.  The information is
	     returned in the audio_offset structure.

	     typedef struct audio_offset {
		     u_int   samples;	/* Total number of bytes transferred */
		     u_int   deltablks; /* Blocks transferred since last checked */
		     u_int   offset;	/* Physical transfer offset in buffer */
	     } audio_offset_t;
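
	     A sketch of polling the output position:

	     /* Sketch: query the current play position in the buffer. */
	     #include <sys/ioctl.h>
	     #include <sys/audioio.h>
	     #include <fcntl.h>
	     #include <stdio.h>
	     #include <err.h>

	     int
	     main(void)
	     {
	             audio_offset_t o;
	             int fd;

	             fd = open("/dev/sound", O_WRONLY);
	             if (fd == -1)
	                     err(1, "open");
	             if (ioctl(fd, AUDIO_GETOOFFS, &o) == -1)
	                     err(1, "AUDIO_GETOOFFS");
	             printf("bytes %u, blocks since last %u, offset %u\n",
	                 o.samples, o.deltablks, o.offset);
	             return 0;
	     }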

     AUDIO_GETINFO (audio_info_t)

     AUDIO_GETBUFINFO (audio_info_t)

     AUDIO_SETINFO (audio_info_t)
	     Get or set audio information as encoded in the audio_info struc‐
	     ture.

	     typedef struct audio_info {
		     struct  audio_prinfo play;	  /* info for play (output) side */
		     struct  audio_prinfo record; /* info for record (input) side */
		     u_int   monitor_gain;		     /* input to output mix */
		     /* BSD extensions */
		     u_int   blocksize;	     /* H/W read/write block size */
		     u_int   hiwat;	     /* output high water mark */
		     u_int   lowat;	     /* output low water mark */
		     u_int   _ispare1;
		     u_int   mode;	     /* current device mode */
	     #define AUMODE_PLAY     0x01
	     #define AUMODE_RECORD   0x02
	     #define AUMODE_PLAY_ALL 0x04    /* do not do real-time correction */
	     } audio_info_t;

	     When setting the current state with AUDIO_SETINFO, the audio_info
	     structure should first be initialized with AUDIO_INITINFO(&info)
	     and then only the particular values to be changed should be set.
	     This lets the audio driver change just those settings and
	     eliminates the need to query the device with AUDIO_GETINFO or
	     AUDIO_GETBUFINFO first (see the sketch at the end of this entry).

	     The mode field should be set to AUMODE_PLAY, AUMODE_RECORD,
	     AUMODE_PLAY_ALL, or a bitwise OR combination of the three.	 Only
	     full-duplex audio devices support simultaneous record and play‐
	     back.

	     hiwat and lowat are used to control write behavior.  Writes to
	     the audio devices will queue up blocks until the high-water mark
	     is reached, at which point any more write calls will block until
	     the queue is drained to the low-water mark.  hiwat and lowat set
	     those high- and low-water marks (in audio blocks).	 The default
	     for hiwat is the maximum value and for lowat 75 % of hiwat.

	     blocksize sets the current audio blocksize.  The generic audio
	     driver layer and the hardware driver have the opportunity to
	     adjust this block size to get it within implementation-required
	     limits.  Upon return from an AUDIO_SETINFO call, the actual
	     blocksize set is returned in this field.  Normally the blocksize
	     is calculated to correspond to 50ms of sound and it is recalcu‐
	     lated when the encoding parameter changes, but if the blocksize
	     is set explicitly this value becomes sticky, i.e., it remains
	     even when the encoding is changed.	 The stickiness can be cleared
	     by reopening the device or setting the blocksize to 0.

	     struct audio_prinfo {
		     u_int   sample_rate;    /* sample rate in samples/s */
		     u_int   channels;	     /* number of channels, usually 1 or 2 */
		     u_int   precision;	     /* number of bits/sample */
		     u_int   encoding;	     /* data encoding (AUDIO_ENCODING_* below) */
		     u_int   gain;	     /* volume level */
		     u_int   port;	     /* selected I/O port */
		     u_long  seek;	     /* BSD extension */
		     u_int   avail_ports;    /* available I/O ports */
		     u_int   buffer_size;    /* total size audio buffer */
		     u_int   _ispare[1];
		     /* Current state of device: */
		     u_int   samples;	     /* number of samples */
		     u_int   eof;	     /* End Of File (zero-size writes) counter */
		     u_char  pause;	     /* non-zero if paused, zero to resume */
		     u_char  error;	     /* non-zero if underflow/overflow occurred */
		     u_char  waiting;	     /* non-zero if another process hangs in open */
		     u_char  balance;	     /* stereo channel balance */
		     u_char  cspare[2];
		     u_char  open;	     /* non-zero if currently open */
		     u_char  active;	     /* non-zero if I/O is currently active */
	     };

	     Note:  many hardware audio drivers require identical playback and
	     recording sample rates, sample encodings, and channel counts.
	     The playing information is always set last and will prevail on
	     such hardware.  If the hardware can handle different settings the
	     AUDIO_PROP_INDEPENDENT property is set.

	     The encoding parameter can have the following values:

	     AUDIO_ENCODING_ULAW	mu-law encoding, 8 bits/sample
	     AUDIO_ENCODING_ALAW	A-law encoding, 8 bits/sample
	     AUDIO_ENCODING_SLINEAR	two's complement signed linear encod‐
					ing with the platform byte order
	     AUDIO_ENCODING_ULINEAR	unsigned linear encoding with the
					platform byte order
	     AUDIO_ENCODING_ADPCM	ADPCM encoding, 8 bits/sample
	     AUDIO_ENCODING_SLINEAR_LE	two's complement signed linear encod‐
					ing with little endian byte order
	     AUDIO_ENCODING_SLINEAR_BE	two's complement signed linear encod‐
					ing with big endian byte order
	     AUDIO_ENCODING_ULINEAR_LE	unsigned linear encoding with little
					endian byte order
	     AUDIO_ENCODING_ULINEAR_BE	unsigned linear encoding with big
					endian byte order
	     AUDIO_ENCODING_AC3		Dolby Digital AC3

	     The gain, port and balance settings provide simple shortcuts to
	     the richer mixer interface described below and are not obtained
	     by AUDIO_GETBUFINFO.  The gain should be in the range
	     [AUDIO_MIN_GAIN, AUDIO_MAX_GAIN] and the balance in the range
	     [AUDIO_LEFT_BALANCE, AUDIO_RIGHT_BALANCE] with the normal setting
	     at AUDIO_MID_BALANCE.

	     The input port should be a combination of:

	     AUDIO_MICROPHONE  to select microphone input.
	     AUDIO_LINE_IN     to select line input.
	     AUDIO_CD	       to select CD input.

	     The output port should be a combination of:

	     AUDIO_SPEAKER    to select speaker output.
	     AUDIO_HEADPHONE  to select headphone output.
	     AUDIO_LINE_OUT   to select line output.

	     The available ports can be found in avail_ports (AUDIO_GETBUFINFO
	     only).

	     buffer_size is the total size of the audio buffer.	 The buffer
	     size divided by the blocksize gives the maximum value for hiwat.
	     Currently the buffer_size can only be read and not set.

	     The seek and samples fields are only used by AUDIO_GETINFO and
	     AUDIO_GETBUFINFO.	seek represents the count of samples pending;
	     samples represents the total number of bytes recorded or played,
	     less those that were dropped due to inadequate consumption/pro‐
	     duction rates.

	     pause returns the current pause/unpause state for recording or
	     playback.	For AUDIO_SETINFO, if the pause value is specified it
	     will either pause or unpause the particular direction.
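
	     Putting this together, switching the play side to 16-bit signed
	     little-endian stereo at 44100 Hz might look like the following
	     sketch; only the fields assigned after AUDIO_INITINFO are
	     changed:

	     /*
	      * Sketch: set the playback format.  AUDIO_INITINFO marks every
	      * field "unchanged"; only the fields assigned below are acted
	      * upon by the driver.
	      */
	     #include <sys/ioctl.h>
	     #include <sys/audioio.h>
	     #include <fcntl.h>
	     #include <err.h>

	     int
	     main(void)
	     {
	             audio_info_t info;
	             int fd;

	             fd = open("/dev/sound", O_WRONLY);
	             if (fd == -1)
	                     err(1, "open");
	             AUDIO_INITINFO(&info);
	             info.mode = AUMODE_PLAY | AUMODE_PLAY_ALL;
	             info.play.sample_rate = 44100;
	             info.play.channels = 2;
	             info.play.precision = 16;
	             info.play.encoding = AUDIO_ENCODING_SLINEAR_LE;
	             if (ioctl(fd, AUDIO_SETINFO, &info) == -1)
	                     err(1, "AUDIO_SETINFO");
	             /* ... write(2) the samples ... */
	             return 0;
	     }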

MIXER DEVICE
     The mixer device, /dev/mixer, may be manipulated with ioctl(2) but does
     not support read(2) or write(2).  It supports the following ioctl(2) com‐
     mands:

     AUDIO_GETDEV (audio_device_t)
	     This command is the same as described above for the sampling
	     devices.

     AUDIO_MIXER_READ (mixer_ctrl_t)

     AUDIO_MIXER_WRITE (mixer_ctrl_t)
	     These commands read the current mixer state or set new mixer
	     state for the specified device dev.  type identifies which type
	     of value is supplied in the mixer_ctrl_t argument.

	     #define AUDIO_MIXER_CLASS	0
	     #define AUDIO_MIXER_ENUM	1
	     #define AUDIO_MIXER_SET	2
	     #define AUDIO_MIXER_VALUE	3
	     typedef struct mixer_ctrl {
		     int dev;			     /* input: nth device */
		     int type;
		     union {
			     int ord;		     /* enum */
			     int mask;		     /* set */
			     mixer_level_t value;    /* value */
		     } un;
	     } mixer_ctrl_t;

	     #define AUDIO_MIN_GAIN  0
	     #define AUDIO_MAX_GAIN  255
	     typedef struct mixer_level {
		     int num_channels;
		     u_char level[8];		    /* [num_channels] */
	     } mixer_level_t;
	     #define AUDIO_MIXER_LEVEL_MONO  0
	     #define AUDIO_MIXER_LEVEL_LEFT  0
	     #define AUDIO_MIXER_LEVEL_RIGHT 1

	     For a mixer value, the value field specifies both the number of
	     channels and the values for each channel.	If the channel count
	     does not match the current channel count, the attempt to change
	     the setting may fail (depending on the hardware device driver
	     implementation).  For an enumeration value, the ord field should
	     be set to one of the possible values as returned by a prior
	     AUDIO_MIXER_DEVINFO command.  The type AUDIO_MIXER_CLASS is only
	     used for classifying particular mixer device types and is not
	     used for AUDIO_MIXER_READ or AUDIO_MIXER_WRITE.
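
	     For example, a value-type control might be read with the
	     following sketch, which assumes an already opened mixer
	     descriptor fd and a control index dev previously located with
	     AUDIO_MIXER_DEVINFO (described below):

	     /*
	      * Sketch: read a two-channel value-type mixer control.  "fd"
	      * is an open /dev/mixer descriptor and "dev" the index of a
	      * control of type AUDIO_MIXER_VALUE.
	      */
	     mixer_ctrl_t ctl;

	     ctl.dev = dev;
	     ctl.type = AUDIO_MIXER_VALUE;
	     ctl.un.value.num_channels = 2;    /* 1 for a mono control */
	     if (ioctl(fd, AUDIO_MIXER_READ, &ctl) == -1)
	             err(1, "AUDIO_MIXER_READ");
	     printf("left %d right %d\n",
	         ctl.un.value.level[AUDIO_MIXER_LEVEL_LEFT],
	         ctl.un.value.level[AUDIO_MIXER_LEVEL_RIGHT]);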

     AUDIO_MIXER_DEVINFO (mixer_devinfo_t)
	     This command is used iteratively to fetch audio mixer device
	     information into the input/output mixer_devinfo_t argument.  To
	     query all the supported devices, start with an index field of 0
	     and continue with successive devices (1, 2, ...) until the com‐
	     mand returns an error.

	     typedef struct mixer_devinfo {
		     int index;		     /* input: nth mixer device */
		     audio_mixer_name_t label;
		     int type;
		     int mixer_class;
		     int next, prev;
	     #define AUDIO_MIXER_LAST	     -1
		     union {
			     struct audio_mixer_enum {
				     int num_mem;
				     struct {
					     audio_mixer_name_t label;
					     int ord;
				     } member[32];
			     } e;
			     struct audio_mixer_set {
				     int num_mem;
				     struct {
					     audio_mixer_name_t label;
					     int mask;
				     } member[32];
			     } s;
			     struct audio_mixer_value {
				     audio_mixer_name_t units;
				     int num_channels;
				     int delta;
			     } v;
		     } un;
	     } mixer_devinfo_t;

	     The label field identifies the name of this particular mixer con‐
	     trol.  The index field may be used as the dev field in
	     AUDIO_MIXER_READ and AUDIO_MIXER_WRITE commands.  The type field
	     identifies the type of this mixer control.	 Enumeration types are
	     typically used for on/off style controls (e.g. a mute control) or
	     for input/output device selection (e.g. select recording input
	     source from CD, line in, or microphone).  Set types are similar
	     to enumeration types but any combination of the mask bits can be
	     used.

	     The mixer_class field identifies what class of control this is.
	     The (arbitrary) value set by the hardware driver may be deter‐
	     mined by examining the mixer_class field of the class itself, a
	     mixer of type AUDIO_MIXER_CLASS.  For example, a mixer control‐
	     ling the input gain on the line in circuit would have a
	     mixer_class that matches an input class device with the name
	     “inputs” (AudioCinputs), and would have a label of “line”
	     (AudioNline).  Mixer controls which control audio circuitry for a
	     particular audio source (e.g. line-in, CD in, DAC output) are
	     collected under the input class, while those which control all
	     audio sources (e.g. master volume, equalization controls) are
	     under the output class.  Hardware devices capable of recording
	     typically also have a record class, for controls that only affect
	     recording, and also a monitor class.

	     The next and prev may be used by the hardware device driver to
	     provide hints for the next and previous devices in a related set
	     (for example, the line in level control would have the line in
	     mute as its “next” value).	 If there is no relevant next or pre‐
	     vious value, AUDIO_MIXER_LAST is specified.

	     For AUDIO_MIXER_ENUM mixer control types, the enumeration values
	     and their corresponding names are filled in.  For example, a mute
	     control would return appropriate values paired with AudioNon and
	     AudioNoff.	 For AUDIO_MIXER_VALUE and AUDIO_MIXER_SET mixer con‐
	     trol types, the channel count is returned; the units name speci‐
	     fies what the level controls (typical values are AudioNvolume,
	     AudioNtreble, AudioNbass).
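
	     A sketch of the enumeration loop; it assumes the
	     audio_mixer_name_t type exposes its string in a name member, as
	     declared in <sys/audioio.h>:

	     /* Sketch: list every mixer control and its type. */
	     #include <sys/ioctl.h>
	     #include <sys/audioio.h>
	     #include <fcntl.h>
	     #include <stdio.h>
	     #include <string.h>
	     #include <err.h>

	     int
	     main(void)
	     {
	             mixer_devinfo_t di;
	             int fd;

	             fd = open("/dev/mixer", O_RDONLY);
	             if (fd == -1)
	                     err(1, "open");
	             memset(&di, 0, sizeof(di));
	             for (di.index = 0;
	                 ioctl(fd, AUDIO_MIXER_DEVINFO, &di) == 0; di.index++)
	                     printf("%d: %s (type %d, class %d)\n", di.index,
	                         di.label.name, di.type, di.mixer_class);
	             return 0;
	     }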

     By convention, the mixer classes (controls of type AUDIO_MIXER_CLASS)
     can be distinguished from other mixer controls because they use a name
     from one of the AudioC* string values.

FILES
     /dev/audio
     /dev/audioctl
     /dev/sound
     /dev/mixer

SEE ALSO
     audioctl(1), mixerctl(1), ioctl(2), ossaudio(3), midi(4), radio(4)

   ISA bus
     aria(4), ess(4), gus(4), guspnp(4), pas(4), sb(4), wss(4), ym(4)

   PCI bus
     auacer(4), auich(4), auixp(4), autri(4), auvia(4), azalia(4), clcs(4),
     clct(4), cmpci(4), eap(4), emuxki(4), esa(4), esm(4), eso(4), fms(4),
     neo(4), sv(4), yds(4)

   TURBOchannel
     bba(4)

   USB
     uaudio(4)

BUGS
     If the device is used in mmap(2) it is currently always mapped for writ‐
     ing (playing) due to VM system weirdness.

BSD			       September 5, 2011			   BSD