Bus error

There are two main causes of bus errors:

non-existent address
The CPU is instructed by software to read or write a specific physical memory address. Accordingly, the CPU sets this physical address on its address bus and requests all other hardware connected to the CPU to respond with the results, if they answer for this specific address. If no other hardware responds, the CPU raises an exception, stating that the requested physical address is unrecognised by the whole computer system. Note that this only covers physical memory addresses. When software tries to access an undefined virtual memory address, that is generally considered to be a segmentation fault rather than a bus error.
unaligned access
Most CPUs are byte-addressable, where each unique memory address refers to an 8-bit byte. Most CPUs can access individual bytes from each memory address, but they generally cannot access larger units (16 bits, 32 bits, 64 bits and so on) without these units being "aligned" to a specific boundary, such as 16 bits (addresses 0, 2, 4 can be accessed, addresses from 1, 3, 5, are unaligned) or 32 bits (0, 4, 8, 12 are aligned, all addresses in-between are unaligned). Attempting to access a value larger than a byte at an unaligned address can cause a bus error.

CPUs generally access data at the full width of their data bus at all times. To address bytes, they access memory at the full width of their data bus, then mask and shift to address the individual byte. This is inefficient, but tolerated as it is an essential feature for most software, especially string-processing. Unlike bytes, larger units can span two aligned addresses and would thus require more than one fetch on the data bus. It is possible for CPUs to support this, but this functionality is rarely required directly at the machine code level, thus CPU designers normally avoid implementing it and instead issue bus errors for unaligned memory access.


19 Eponymous Laws Of Software Development

One surefire way to sound really, really smart is to invoke a law or principle named after some long-dead guy (an alive guy is acceptable too, but lacks slightly in smart points).

This realization struck me the other day while I was reading a blog post that made a reference to Postel's law. Immediately I knew the author of this post must be a highly intelligent, card-carrying member of MENSA. He was probably sporting some geeky XKCD t-shirt with a lame Unix joke while writing the post.

Well friends, I admit I had to look that law up, and in the process realized I could sound just as scary smart as that guy if I just made reference to every eponymous (I'll wait while you look that one up) "law" I could find.

And as a public service, I am going to help all of you appear smart by posting my findings here! Don't let anyone ever say I don't try to make my readers look good. If you look good, I look good.

Make sure to invoke one of these in your next blog post and sound scary smart just like me.

Postel's Law

The law that inspired this post...

Be conservative in what you send, liberal in what you accept.

Jon Postel originally articulated this as a principle for making TCP implementations robust. The principle is also embodied by HTML, which many cite as a cause of its success or of its failure, depending on whom you ask.

In today's highly charged political environment, Postel's law is a uniter.

Parkinson's Law

Otherwise known as the law of bureaucracy, this law states that...

Work expands so as to fill the time available for its completion.

As contrasted to Haack's Law which states that

Work expands so as to overflow the time available and spill on the floor leaving a very sticky mess.

Pareto Principle

Also known as the 80-20 rule, the Pareto Principle states...

For many phenomena, 80% of consequences stem from 20% of the causes.

This is the principle behind the painful truth that 80% of the bugs in the code arise from 20% of the code. Likewise, 80% of the work done in a company is performed by 20% of the staff. The problem is you don't always have a clear idea of which 20%.

Sturgeon's Revelation

The revelation has nothing to do with seafood, as one might be mistaken to believe. Rather, it states that...

Ninety percent of everything is crud.

Sounds like Sturgeon is a conversation killer at parties. Is this a revelation because that number is so small?

The Peter Principle

One of the most depressing laws in this list, particularly if you have first-hand experience with it from working with incompetent managers.

In a hierarchy, every employee tends to rise to his level of incompetence.

Just read Dilbert (or watch The Office) to get some examples of this in action.

Hofstadter's Law

This one is great because it is so true. I knew this law, and still this post took longer than I expected.

A task always takes longer than you expect, even when you take into account Hofstadter's Law.

By the way, you get extra bonus points among your Mensa friends for invoking a self-referential law like this one.

Murphy's Law

The one we all know and love.

If anything can go wrong, it will.

Speaking of which, wait one second while I backup my computer.

The developer's response to this law should be defensive programming and the age-old Boy Scout motto: Be Prepared.

Brooks's Law

Adding manpower to a late software project makes it later.

Named after Fred Brooks, a.k.a. Mr. Mythical Man-Month. My favorite corollary to this law is the following...

The bearing of a child takes nine months, no matter how many women are assigned.

Obviously, Brooks was not a statistician.

Conway's Law

Having nothing to do with country music, this law states...

Any piece of software reflects the organizational structure that produced it.

Put another way...

If you have four groups working on a compiler, you'll get a 4-pass compiler.

How many groups are involved in the software you are building?

Kerckhoffs's Principle

This principle is named after a man who must be the only cryptographer ever to have four consecutive consonants in his last name.

In cryptography, a system should be secure even if everything about the system, except for a small piece of information — the key — is public knowledge.

And thus Kerckhoffs raised the banner in the fight against Security through Obscurity. This is the main principle underlying public key cryptography.

Linus's Law

Named after Linus Torvalds, the creator of Linux, this law states...

Given enough eyeballs, all bugs are shallow.

Where you store the eyeballs is up to you.

Reed's Law

The utility of large networks, particularly social networks, scales exponentially with the size of the network.

Keep repeating that to yourself as you continue to invite anyone and everyone to be your friend on Facebook.

Metcalfe's Law

In network theory, the value of a system grows as approximately the square of the number of users of the system.

I wonder if Reed and Metcalfe hung out at the same pubs.

Moore's Law

Probably the most famous law in computing, this law states...

The power of computers per unit cost doubles every 24 months.

The more popular and well known version of Moore's law states...

The number of transistors on an integrated circuit will double in about 18 months.

And we've been racing to keep up ever since.

Rock's Law

I was unable to find Paper's Corollary, nor Scissor's Lemma, so we're left with only Rock's law which states...

The cost of a semiconductor chip fabrication plant doubles every four years.

Buy yours now while prices are still low.

Wirth's law

Software gets slower faster than hardware gets faster.

Ha! Take that Moore's Law!

Zawinski's Law

This law addresses software bloat and states...

Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can.

I hear that the next version of calc.exe is going to include the ability to read email. A more modern formulation of this law should replace email with RSS.

Fitts's Law

This is a law related to usability which states...

Time = a + b log2(D/S + 1)

Or in plain English,

The time to acquire a target is a function of the distance to and the size of the target.

A well known application of this law is placing the Start menu in the bottom left corner, thus making the target very large since the corner is constrained by the left and bottom edges of the screen.

Hick's Law

Has nothing to do with people with bad mullets. I swear. Related to Fitts's law, it states that...

The time it takes to make a decision is a function of the number of possible choices available.

Or in plain math,

Time = b log2(n + 1)

Seems to me this is also a function of the number of people making the decision, like when you and your coworkers are trying to figure out where to have lunch.



Lost in translation

Posted on July 12th, 2007 by Brian Will

A learner's guide to the terminology and concepts of software build processes.

What's the difference between an assembler, a compiler, and an interpreter, and what's a linker?


Let's start with the clearest case. An assembler is a program which translates 'assembly language' code into processor instructions (a.k.a. 'machine instructions'/'machine code', a.k.a. 'native instructions'/'native code'). What's assembly language? 'Assembly', 'assembler', or 'asm' for short, is the generic name given to all low-level languages. Now what's a low-level language? Well, whereas in high-level languages, each line of source code typically translates into more than one processor instruction, in an assembly language, each line directly corresponds to one single processor instruction. Assembly offers the programmer exact control: what you write is exactly what gets executed, instruction-by-instruction.

Because different processors understand different sets of instructions, the assembler language you use must be particular to the processor platform you intend to run your program on. For instance, if you are targeting a processor that uses the x86 instruction set (which includes Intel and AMD processors), then you would use an x86 assembler.

So why write assembly? On the downside, writing your code one processor instruction at a time is far more tedious than writing the functionally equivalent code in a high-level language. Moreover, assembly language can't protect you from even the most basic errors and allows you to do dangerous things like trying to read memory that doesn't belong to your program (something which the OS and the processor conspire to stop your program from doing by halting your program when it tries to do such things). So not only is programming in assembly like using tweezers to move a hill of sand, the tweezers are slippery and sharp. Producing complex, reasonably bug-free programs entirely in assembly is very hard and generally just hasn't been done since the late '80s.

On the upside, the exact control provided by assembly allows for optimizations simply not possible in high-level languages. While compilers and interpreters have gotten quite smart, they very, very rarely, if ever, produce the fastest possible code, leaving room for a human to do better. Again, writing a program entirely in assembly is simply too impractical given the size of most modern programs; however, if a key portion of your code is a bottleneck, it might be beneficial to rewrite that piece of code in assembly and then invoke it from your high-level language code.

Assembly retains one other important role. Some important processor instructions will never be generated by the output of a high-level language, so it is left to assembly code to allow access to those instructions. For instance, on most processors, system calls can only be invoked using a particular instruction, but there's nothing you can write in C code which will make the C compiler spit out that instruction—it's simply something (consciously) missing from the semantics of the language; therefore, to make a system call in C, a piece of assembly code that uses the system call instruction is written in a way that the code, when assembled, can be invoked from your C code. For instance, when you open a file in C with the C standard library's 'fopen' function, depending upon your implementation of C, that function either calls a function written in assembly or is itself written in assembly, and that assembly function contains the instruction to invoke the system call that opens a file.

(A 'system call' is a function provided by the operating system that can't be invoked like a normal function because it exists in the operating system's protected memory space; the OS and processor conspire to protect this memory space from direct access by ordinary programs because otherwise it would be possible for ordinary programs to bring down the whole system out of incompetence or do malevolent things like read files they aren't supposed to be able to access. So, processors typically provide a system-call-invoking instruction which allows ordinary programs to invoke code at OS-defined specific addresses in the OS's protected memory space. By allowing the execution of ordinary programs to enter this memory area only at specific points, the OS can prevent any funny business.)

Assemblers used to be a much bigger deal back in the DOS days when most programmers worked in assembly, but those days are gone. Today, assembly work is rarely done except by developers of operating systems and device drivers, and whereas there used to be many assemblers for Intel-compatible processors, today there are only a few real options (on the upside, they are all now free downloads):

  • MASM (Microsoft Macro Assembler)
  • GAS (GNU Assembler)
  • FASM (Flat Assembler)
  • NASM (Netwide Assembler)

Aside from these options, some C compilers feature mechanisms to embed assembly code amongst the C code. For instance, the C compiler in the GCC (GNU Compiler Collection) allows you to embed GAS assembly code using a special directive. (Understand, this and similar mechanisms in other C and C++ compilers are not official parts of either the C or C++ languages.)

Now, whereas high-level languages, such as Java, C, or C++, are typically highly standardized, the assembler languages for a particular processor may diverge significantly in syntax. For example, while most assemblers on the x86 platform tend to follow the syntax established by Intel in its processor manuals (with the notable exception of GAS), they still have many sizable differences.

A high-level assembler is an assembler with some high-level-language-like conveniences thrown in. MASM arguably fits into this category, but the best example is certainly HLA (High Level Assembly), an assembler language originally conceived as a teaching tool.


A compiler is a program which translates high-level language code—called the source—into some other form (usually processor instructions)—called the target. Whereas assemblers do basically a verbatim, one-to-one translation—like a translation from English to Pig-Latin—compilers typically have a considerably more sophisticated task—more like a translation from English to Latin. So whereas the whole point of assembly generally is that the programmer controls the exact sequence of instructions, compilers only guarantee that the code they spit out is functionally equivalent to the semantics expressed in the source. Moreover, compilers generally attempt to optimize the code they produce, making the end result correspond even less directly to the source.

Just as assemblers are particular to the precise assembly syntax they can translate, compilers are specific to the high-level language(s) they can translate, i.e. a compiler for the C language can translate C code but not Pascal code. Also like assemblers, compilers are particular to the processor platform(s) which they can target (except some compilers don't spit out processor instructions at all but rather some kind of 'intermediate code', as I'll discuss later).

Consider the case of the C language. Like with assembly, there used to be a wide variety of C compilers used back in the 80's and 90's, but today the market has sorted out, and there are only a few notable C compilers. The two most important are:

  • GCC (GNU Compiler Collection): Originally called the GNU C Compiler, GCC now supports many languages other than C and C++. GCC can target dozens of processor platforms, including all the most popular ones.
  • Microsoft Visual C++: Despite the name, Visual C++ supports C as well as C++. Visual C++ only targets the Intel-compatible platforms: x86, x64, and Itanium. (Technically, 'Visual C++' is actually the name of Microsoft's IDE (Integrated Development Environment), but there isn't a more commonly used name for Microsoft's C or C++ compilers.)


The source code of all but the smallest programs is spread across multiple files, and in most languages, these files are treated as separate 'compilation units', i.e. they are compiled independently of each other. When a compiler produces processor instructions, the resulting code is called 'object code', and the resulting files are called 'object files'. While some operating systems, including Unix systems, will allow an object file to be run as a program (i.e. the OS will happily load the file and begin execution of its instructions), this is of limited use because, to make a complete program, the object files need to be 'linked' together:

In a program, the code in one source file makes a reference to code in other files and/or is referenced by code in other files: a program is a web of source files which make external references to each other, and so the source files depend upon each other. (If a source file does not reference other files and itself does not get referenced by other files, then it can't have any effect on or be affected by the rest of the code, so it can't be said to be a part of the same program.) Still, each source file is compiled separately, meaning that, when processing one source file, the compiler has no knowledge of the files referenced by the source code; consequently, when the compiler encounters an external reference in the source code, all it can do is leave a 'stub' in the object code allowing the connection to be patched later. Patching together the external reference stubs of one object file to another is precisely the job of a linker. It is the linker that takes many object files and produces from them an executable file (e.g. an .exe file on Windows).


Whereas assemblers and compilers translate code into other forms of code, an interpreter is a program that translates code into action, i.e. an interpreter reads code and does what it says, right then and there. If you intend your program to be run via an interpreter, then every user must have both your program and the interpreter to run it, and your program is then started by starting the interpreter and telling it to run your program. (This may sound unfriendly to naive users, but the installation and starting of the interpreter can be disguised from users such that they install and run your program like any other.)

Because the translation happens anew every time you run the program, interpretation introduces a significant performance overhead. This cost can be mitigated using what I call the 'hybrid model'. First, the source code is compiled into some intermediate form (i.e. code which is more like processor instructions than high-level code but which is not executable by the processor), and then, to run the program, an interpreter executes this intermediate code. (In this model, the linking of the compilation units is typically done by the interpreter every time the program is run.)

A further refinement of the hybrid model is to use a JIT (just-in-time) compiler. You use a JIT compiler as you would an interpreter—you run your program by feeding the JIT compiler some form of code (usually intermediate code)—but the JIT compiler compiles code into processor instructions and runs those instead of interpreting the code. Despite the time spent performing this compilation (typically reflected in a longer program load time), JIT compiling is usually considerably faster than interpretation: using a JIT compiler with the hybrid model is typically only 10%-20% less performant than if the code were 'natively compiled' (compiled into an executable and run as such), compared to 70%-100% slower for interpreting intermediate code. [The term "performant" is used by programmers to mean 'fast performing' or 'acceptably performing', but you won't find it in any dictionary—yet.] Some claim that, in a few cases, a sufficiently smart JIT compiler can run code faster than the same program compiled into an executable because the JIT compiler can make optimizations only discoverable at runtime. (The comparative performance of JIT compiling versus native compiling is a hotly debated topic. While most concede that native compilation almost always produces better performance, it's debated how much of a performance hit JIT compiling introduces.)

Understand that, whether using the hybrid model or not, an interpreted program is limited by its interpreter. Just as programs executed by the OS can only do what the OS allows them to do, interpreted programs can only do what their interpreter allows them to do. This has potential security benefits: as the theory goes, users can download programs and run them in an interpreter without having to trust those programs because the interpreter can block its programs from accessing files on the system and/or using the network connection, etc. In such schemes, the interpreter is often called a VM (virtual machine) because, as far as the programs which it runs are concerned, it looks and acts much like a full computer system. In practice, truly secure virtual machines aren't quite a reality, for real VMs have bugs which the malicious programs they run can exploit to breach the limitations imposed by the VM; consequently, users should still be careful about which programs they download and run, even if the programs run in a VM.

Another often-cited benefit of interpretation is that, as long as an appropriate interpreter for your language exists on all the platforms you wish to run your program on, you only need to write the program once. This is often called 'write once, run anywhere'. This argument made a bit more sense when computers were slower and so compilation took considerably longer, making compiling your program for all target platforms a bit more bothersome, but aversion to this inconvenience doesn't really explain why interpreted programs are considered so much more portable. The real reason writing your program for an interpreted environment makes it generally easier to get it working on multiple platforms is that the interpreter acts as a layer of indirection between your program and the OS, so the interpreter can handle the messy particulars of dealing with variances between OS's, e.g. the process of opening a file often differs from one OS to the other, but your program only has to tell the interpreter to open a file, and the interpreter in turn deals with the particulars of the OS.

The portability advantage of interpretation holds out as long as your program uses functionality that is available and works consistently on all of your target platforms. A notorious problem area is GUI's (Graphical User Interfaces): many GUI 'widgets' (windows, menus, scrollbars, drop-down menus, etc.) simply don't look and act the same on Windows, Macs, and Linux desktops. Attempts to provide a cross-platform means of writing GUI code have to date only been partially successful.

In principle, any language can be either interpreted or compiled, but in practice, languages are designed with a particular model in mind. For instance, were you to interpret C language code, you would defeat the purposes of using C in the first place (mainly performance and greater machine control), and so this just isn't done (though I bet someone somewhere has done it—someone somewhere has done everything, no matter how strange or daft). Another language, Java, was conceived and implemented to use the hybrid model; 'native compilers' (compilers that spit out processor instructions) for Java exist, but aren't used very often because the performance benefits generally aren't significant enough to be worth the downsides.

Thus endeth the lesson.

Tags: Programming, Learn Programming //

7 Responses to "Lost in translation"

  1. […] A brief (and good) explanation of what assembly, compilers, interpreters and virtual machines do. […]

  2. John Connors // Jul 13, 2007 at 6:55 am

    There is still quite a lot of variety in C++ compilers. There's Borland, Open Watcom, Lcc, and Intel C/C++, and it looks like the LLVM crew are about to release yet another one.

  3. Brian Will // Jul 13, 2007 at 8:07 am

    @John Connors: Sure, although Borland seems pretty dead as far as I can tell. Not even a proper website.

    I did say "two most important". For learners at least, the only C/C++ compilers of real concern are Microsoft's and GCC.

  4. James Williams // Jul 13, 2007 at 9:51 am

    FYI, it's "GNU Compiler Collection", not "Connection".

  5. Brian Will // Jul 13, 2007 at 2:33 pm

    @James: Thanks. Spellcheck typo ^o^

  6. Kumar // Jul 15, 2007 at 3:34 pm

    Very well written. Clear and to the point. I couldn't have explained it better myself :)

  7. Karl // Jul 16, 2007 at 9:13 am

    Five #$%ing years of studying CS and three years of software engineering… and this is the first time I've ever read a concise overview of all types of computer language "translators" written in a single place. Thank you!


Copyright © 2007 brian will . net
Creative Commons License
This work is licensed under a Creative Commons Attribution-Share Alike 3.0 License




The Single UNIX ® Specification, Version 2
Copyright © 1997 The Open Group


nanosleep - high resolution sleep (REALTIME)


  #include <time.h>

int nanosleep(const struct timespec *rqtp, struct timespec *rmtp);


The nanosleep() function causes the current thread to be suspended from execution until either the time interval specified by the rqtp argument has elapsed or a signal is delivered to the calling thread and its action is to invoke a signal-catching function or to terminate the process. The suspension time may be longer than requested because the argument value is rounded up to an integer multiple of the sleep resolution or because of the scheduling of other activity by the system. But, except for the case of being interrupted by a signal, the suspension time will not be less than the time specified by rqtp, as measured by the system clock, CLOCK_REALTIME.

The use of the nanosleep() function has no effect on the action or blockage of any signal.


If the nanosleep() function returns because the requested time has elapsed, its return value is zero.

If the nanosleep() function returns because it has been interrupted by a signal, the function returns a value of -1 and sets errno to indicate the interruption. If the rmtp argument is non-NULL, the timespec structure referenced by it is updated to contain the amount of time remaining in the interval (the requested time minus the time actually slept). If the rmtp argument is NULL, the remaining time is not returned.

If nanosleep() fails, it returns a value of -1 and sets errno to indicate the error.


The nanosleep() function will fail if:

[EINTR]
The nanosleep() function was interrupted by a signal.

[EINVAL]
The rqtp argument specified a nanosecond value less than zero or greater than or equal to 1000 million.

[ENOSYS]
The nanosleep() function is not supported by this implementation.








SEE ALSO

sleep(), <time.h>.


Derived from the POSIX Realtime Extension (1003.1b-1993/1003.1i-1995)

UNIX ® is a registered Trademark of The Open Group.



The Open Group Base Specifications Issue 6
IEEE Std 1003.1, 2004 Edition
Copyright © 2001-2004 The IEEE and The Open Group, All Rights reserved.


mmap - map pages of memory


[MC3] [Option Start] #include <sys/mman.h>

void *mmap(void *addr, size_t len, int prot, int flags,
       int fildes, off_t off); [Option End]


The mmap() function shall establish a mapping between a process' address space and a file, shared memory object, or [TYM] [Option Start]  typed memory object. [Option End] The format of the call is as follows:

pa=mmap(addr, len, prot, flags, fildes, off);

The mmap() function shall establish a mapping between the address space of the process at an address pa for len bytes to the memory object represented by the file descriptor fildes at offset off for len bytes. The value of pa is an implementation-defined function of the parameter addr and the values of flags, further described below. A successful mmap() call shall return pa as its result. The address range starting at pa and continuing for len bytes shall be legitimate for the possible (not necessarily current) address space of the process. The range of bytes starting at off and continuing for len bytes shall be legitimate for the possible (not necessarily current) offsets in the file, shared memory object, or [TYM] [Option Start]  typed memory object [Option End]  represented by fildes.

[TYM] [Option Start] If fildes represents a typed memory object opened with either the POSIX_TYPED_MEM_ALLOCATE flag or the POSIX_TYPED_MEM_ALLOCATE_CONTIG flag, the memory object to be mapped shall be that portion of the typed memory object allocated by the implementation as specified below. In this case, if off is non-zero, the behavior of mmap() is undefined. If fildes refers to a valid typed memory object that is not accessible from the calling process, mmap() shall fail. [Option End]

The mapping established by mmap() shall replace any previous mappings for those whole pages containing any part of the address space of the process starting at pa and continuing for len bytes.

If the size of the mapped file changes after the call to mmap() as a result of some other operation on the mapped file, the effect of references to portions of the mapped region that correspond to added or removed portions of the file is unspecified.

The mmap() function shall be supported for regular files, shared memory objects, and [TYM] [Option Start]  typed memory objects. [Option End] Support for any other type of file is unspecified.

If len is zero, mmap() shall fail and no mapping shall be established.

The parameter prot determines whether read, write, execute, or some combination of accesses are permitted to the data being mapped. The prot shall be either PROT_NONE or the bitwise-inclusive OR of one or more of the other flags in the following table, defined in the <sys/mman.h> header.

Symbolic Constant   Description

PROT_READ           Data can be read.
PROT_WRITE          Data can be written.
PROT_EXEC           Data can be executed.
PROT_NONE           Data cannot be accessed.

If an implementation cannot support the combination of access types specified by prot, the call to mmap() shall fail.

An implementation may permit accesses other than those specified by prot; [MPR] [Option Start]  however, if the Memory Protection option is supported, the implementation shall not permit a write to succeed where PROT_WRITE has not been set or shall not permit any access where PROT_NONE alone has been set. The implementation shall support at least the following values of prot: PROT_NONE, PROT_READ, PROT_WRITE, and the bitwise-inclusive OR of PROT_READ and PROT_WRITE. [Option End] If the Memory Protection option is not supported, the result of any access that conflicts with the specified protection is undefined. The file descriptor fildes shall have been opened with read permission, regardless of the protection options specified. If PROT_WRITE is specified, the application shall ensure that it has opened the file descriptor fildes with write permission unless MAP_PRIVATE is specified in the flags parameter as described below.

The parameter flags provides other information about the handling of the mapped data. The value of flags is the bitwise-inclusive OR of these options, defined in <sys/mman.h>:

Symbolic Constant   Description

MAP_SHARED          Changes are shared.
MAP_PRIVATE         Changes are private.
MAP_FIXED           Interpret addr exactly.

Implementations that do not support the Memory Mapped Files option are not required to support MAP_PRIVATE.

It is implementation-defined whether MAP_FIXED shall be supported. [XSI] [Option Start]  MAP_FIXED shall be supported on XSI-conformant systems. [Option End]

MAP_SHARED and MAP_PRIVATE describe the disposition of write references to the memory object. If MAP_SHARED is specified, write references shall change the underlying object. If MAP_PRIVATE is specified, modifications to the mapped data by the calling process shall be visible only to the calling process and shall not change the underlying object. It is unspecified whether modifications to the underlying object done after the MAP_PRIVATE mapping is established are visible through the MAP_PRIVATE mapping. Either MAP_SHARED or MAP_PRIVATE can be specified, but not both. The mapping type is retained across fork().

[TYM] [Option Start] When fildes represents a typed memory object opened with either the POSIX_TYPED_MEM_ALLOCATE flag or the POSIX_TYPED_MEM_ALLOCATE_CONTIG flag, mmap() shall, if there are enough resources available, map len bytes allocated from the corresponding typed memory object which were not previously allocated to any process in any processor that may access that typed memory object. If there are not enough resources available, the function shall fail. If fildes represents a typed memory object opened with the POSIX_TYPED_MEM_ALLOCATE_CONTIG flag, these allocated bytes shall be contiguous within the typed memory object. If fildes represents a typed memory object opened with the POSIX_TYPED_MEM_ALLOCATE flag, these allocated bytes may be composed of non-contiguous fragments within the typed memory object. If fildes represents a typed memory object opened with neither the POSIX_TYPED_MEM_ALLOCATE_CONTIG flag nor the POSIX_TYPED_MEM_ALLOCATE flag, len bytes starting at offset off within the typed memory object are mapped, exactly as when mapping a file or shared memory object. In this case, if two processes map an area of typed memory using the same off and len values and using file descriptors that refer to the same memory pool (either from the same port or from a different port), both processes shall map the same region of storage. [Option End]

When MAP_FIXED is set in the flags argument, the implementation is informed that the value of pa shall be addr, exactly. If MAP_FIXED is set, mmap() may return MAP_FAILED and set errno to [EINVAL]. If a MAP_FIXED request is successful, the mapping established by mmap() replaces any previous mappings for the process' pages in the range [pa,pa+len).

When MAP_FIXED is not set, the implementation uses addr in an implementation-defined manner to arrive at pa. The pa so chosen shall be an area of the address space that the implementation deems suitable for a mapping of len bytes to the file. All implementations interpret an addr value of 0 as granting the implementation complete freedom in selecting pa, subject to constraints described below. A non-zero value of addr is taken to be a suggestion of a process address near which the mapping should be placed. When the implementation selects a value for pa, it never places a mapping at address 0, nor does it replace any extant mapping.

The off argument is constrained to be aligned and sized according to the value returned by sysconf() when passed _SC_PAGESIZE or _SC_PAGE_SIZE. When MAP_FIXED is specified, the application shall ensure that the argument addr also meets these constraints. The implementation performs mapping operations over whole pages. Thus, while the argument len need not meet a size or alignment constraint, the implementation shall include, in any mapping operation, any partial page specified by the range [pa,pa+len).

The system shall always zero-fill any partial page at the end of an object. Further, the system shall never write out any modified portions of the last page of an object which are beyond its end. [MPR] [Option Start]  References within the address range starting at pa and continuing for len bytes to whole pages following the end of an object shall result in delivery of a SIGBUS signal. [Option End]

An implementation may generate SIGBUS signals when a reference would cause an error in the mapped object, such as an out-of-space condition.

The mmap() function shall add an extra reference to the file associated with the file descriptor fildes which is not removed by a subsequent close() on that file descriptor. This reference shall be removed when there are no more mappings to the file.

The st_atime field of the mapped file may be marked for update at any time between the mmap() call and the corresponding munmap() call. The initial read or write reference to a mapped region shall cause the file's st_atime field to be marked for update if it has not already been marked for update.

The st_ctime and st_mtime fields of a file that is mapped with MAP_SHARED and PROT_WRITE shall be marked for update at some point in the interval between a write reference to the mapped region and the next call to msync() with MS_ASYNC or MS_SYNC for that portion of the file by any process. If there is no such call and if the underlying file is modified as a result of a write reference, then these fields shall be marked for update at some time after the write reference.

There may be implementation-defined limits on the number of memory regions that can be mapped (per process or per system).

[XSI] [Option Start] If such a limit is imposed, whether the number of memory regions that can be mapped by a process is decreased by the use of shmat() is implementation-defined. [Option End]

If mmap() fails for reasons other than [EBADF], [EINVAL], or [ENOTSUP], some of the mappings in the address range starting at addr and continuing for len bytes may have been unmapped.


Upon successful completion, the mmap() function shall return the address at which the mapping was placed (pa); otherwise, it shall return a value of MAP_FAILED and set errno to indicate the error. The symbol MAP_FAILED is defined in the <sys/mman.h> header. No successful return from mmap() shall return the value MAP_FAILED.


The mmap() function shall fail if:

[EACCES] The fildes argument is not open for read, regardless of the protection specified, or fildes is not open for write and PROT_WRITE was specified for a MAP_SHARED type mapping.
[EAGAIN] [ML] [Option Start] The mapping could not be locked in memory, if required by mlockall(), due to a lack of resources. [Option End]
[EBADF] The fildes argument is not a valid open file descriptor.
[EINVAL] The value of len is zero.
[EINVAL] The addr argument (if MAP_FIXED was specified) or off is not a multiple of the page size as returned by sysconf(), or is considered invalid by the implementation.
[EINVAL] The value of flags is invalid (neither MAP_PRIVATE nor MAP_SHARED is set).
[EMFILE] The number of mapped regions would exceed an implementation-defined limit (per process or per system).
[ENODEV] The fildes argument refers to a file whose type is not supported by mmap().
[ENOMEM] MAP_FIXED was specified, and the range [addr,addr+len) exceeds that allowed for the address space of a process; or, if MAP_FIXED was not specified and there is insufficient room in the address space to effect the mapping.
[ENOMEM] [ML] [Option Start] The mapping could not be locked in memory, if required by mlockall(), because it would require more space than the system is able to supply. [Option End]
[ENOMEM] [TYM] [Option Start] Not enough unallocated memory resources remain in the typed memory object designated by fildes to allocate len bytes. [Option End]
[ENOTSUP] MAP_FIXED or MAP_PRIVATE was specified in the flags argument and the implementation does not support this functionality.
[ENOTSUP] The implementation does not support the combination of accesses requested in the prot argument.
[ENXIO] Addresses in the range [off,off+len) are invalid for the object specified by fildes.
[ENXIO] MAP_FIXED was specified in flags and the combination of addr, len, and off is invalid for the object specified by fildes.
[ENXIO] [TYM] [Option Start] The fildes argument refers to a typed memory object that is not accessible from the calling process. [Option End]
[EOVERFLOW] The file is a regular file and the value of off plus len exceeds the offset maximum established in the open file description associated with fildes.

The following sections are informative.




Use of mmap() may reduce the amount of memory available to other memory allocation functions.

Use of MAP_FIXED may result in unspecified behavior in further use of malloc() and shmat(). The use of MAP_FIXED is discouraged, as it may prevent an implementation from making the most effective use of resources.

The application must ensure correct synchronization when using mmap() in conjunction with any other file access method, such as read() and write(), standard input/output, and shmat().

The mmap() function allows access to resources via address space manipulations, instead of read()/write(). Once a file is mapped, all a process has to do to access it is use the data at the address to which the file was mapped. So, using pseudo-code to illustrate the way in which an existing program might be changed to use mmap(), the following:

fildes = open(...)
lseek(fildes, some_offset)
read(fildes, buf, len)
/* Use data in buf. */

becomes:

fildes = open(...)
address = mmap(0, len, PROT_READ, MAP_PRIVATE, fildes, some_offset)
/* Use data at address. */


After considering several other alternatives, it was decided to adopt the mmap() definition found in SVR4 for mapping memory objects into process address spaces. The SVR4 definition is minimal, in that it describes only what has been built, and what appears to be necessary for a general and portable mapping facility.

Note that while mmap() was first designed for mapping files, it is actually a general-purpose mapping facility. It can be used to map any appropriate object, such as memory, files, devices, and so on, into the address space of a process.

When a mapping is established, it is possible that the implementation may need to map more than is requested into the address space of the process because of hardware requirements. An application, however, cannot count on this behavior. Implementations that do not use a paged architecture may simply allocate a common memory region and return the address of it; such implementations probably do not allocate any more than is necessary. References past the end of the requested area are unspecified.

If an application requests a mapping that would overlay existing mappings in the process, it might be desirable that an implementation detect this and inform the application. However, the default, portable (not MAP_FIXED) operation does not overlay existing mappings. On the other hand, if the program specifies a fixed address mapping (which requires some implementation knowledge to determine a suitable address, if the function is supported at all), then the program is presumed to be successfully managing its own address space and should be trusted when it asks to map over existing data structures. Furthermore, it is also desirable to make as few system calls as possible, and it might be considered onerous to require an munmap() before an mmap() to the same address range. This volume of IEEE Std 1003.1-2001 specifies that the new mappings replace any existing mappings, following existing practice in this regard.

It is not expected, when the Memory Protection option is supported, that all hardware implementations are able to support all combinations of permissions at all addresses. When this option is supported, implementations are required to disallow write access to mappings without write permission and to disallow access to mappings without any access permission. Other than these restrictions, implementations may allow access types other than those requested by the application. For example, if the application requests only PROT_WRITE, the implementation may also allow read access. A call to mmap() fails if the implementation cannot support allowing all the access requested by the application. For example, some implementations cannot support a request for both write access and execute access simultaneously. All implementations supporting the Memory Protection option must support requests for no access, read access, write access, and both read and write access. Strictly conforming code must only rely on the required checks. These restrictions allow for portability across a wide range of hardware.

The MAP_FIXED address treatment is likely to fail for non-page-aligned values and for certain architecture-dependent address ranges. Conforming implementations cannot count on being able to choose address values for MAP_FIXED without utilizing non-portable, implementation-defined knowledge. Nonetheless, MAP_FIXED is provided as a standard interface conforming to existing practice for utilizing such knowledge when it is available.

Similarly, in order to allow implementations that do not support virtual addresses, support for directly specifying any mapping addresses via MAP_FIXED is not required and thus a conforming application may not count on it.

The MAP_PRIVATE function can be implemented efficiently when memory protection hardware is available. When such hardware is not available, implementations can implement such "mappings" by simply making a real copy of the relevant data into process private memory, though this tends to behave similarly to read().

The function has been defined to allow for many different models of using shared memory. However, all uses are not equally portable across all machine architectures. In particular, the mmap() function allows the system as well as the application to specify the address at which to map a specific region of a memory object. The most portable way to use the function is always to let the system choose the address, specifying NULL as the value for the argument addr and not to specify MAP_FIXED.

If it is intended that a particular region of a memory object be mapped at the same address in a group of processes (on machines where this is even possible), then MAP_FIXED can be used to pass in the desired mapping address. The system can still be used to choose the desired address if the first such mapping is made without specifying MAP_FIXED, and then the resulting mapping address can be passed to subsequent processes for them to pass in via MAP_FIXED. The availability of a specific address range cannot be guaranteed, in general.

The mmap() function can be used to map a region of memory that is larger than the current size of the object. Memory access within the mapping but beyond the current end of the underlying objects may result in SIGBUS signals being sent to the process. The reason for this is that the size of the object can be manipulated by other processes and can change at any moment. The implementation should tell the application that a memory reference is outside the object where this can be detected; otherwise, written data may be lost and read data may not reflect actual data in the object.

Note that references beyond the end of the object do not extend the object as the new end cannot be determined precisely by most virtual memory hardware. Instead, the size can be directly manipulated by ftruncate().

Process memory locking does apply to shared memory regions, and the MCL_FUTURE flag to mlockall() can be relied upon to cause new shared memory regions to be automatically locked.

Existing implementations of mmap() return the value -1 when unsuccessful. Since the casting of this value to type void * cannot be guaranteed by the ISO C standard to be distinct from a successful value, this volume of IEEE Std 1003.1-2001 defines the symbol MAP_FAILED, which a conforming implementation does not return as the result of a successful call.




exec(), fcntl(), fork(), lockf(), msync(), munmap(), mprotect(), posix_typed_mem_open(), shmat(), sysconf(), the Base Definitions volume of IEEE Std 1003.1-2001, <sys/mman.h>


First released in Issue 4, Version 2.

Issue 5

Moved from X/OPEN UNIX extension to BASE.

Aligned with mmap() in the POSIX Realtime Extension as follows:

  • The DESCRIPTION is extensively reworded.

  • The [EAGAIN] and [ENOTSUP] mandatory error conditions are added.

  • New cases of [ENOMEM] and [ENXIO] are added as mandatory error conditions.

  • The value returned on failure is the value of the constant MAP_FAILED; this was previously defined as -1.

Large File Summit extensions are added.

Issue 6

The mmap() function is marked as part of the Memory Mapped Files option.

The Open Group Corrigendum U028/6 is applied, changing (void *)-1 to MAP_FAILED.

The following new requirements on POSIX implementations derive from alignment with the Single UNIX Specification:

  • The DESCRIPTION is updated to describe the use of MAP_FIXED.

  • The DESCRIPTION is updated to describe the addition of an extra reference to the file associated with the file descriptor passed to mmap().

  • The DESCRIPTION is updated to state that there may be implementation-defined limits on the number of memory regions that can be mapped.

  • The DESCRIPTION is updated to describe constraints on the alignment and size of the off argument.

  • The [EINVAL] and [EMFILE] error conditions are added.

  • The [EOVERFLOW] error condition is added. This change is to support large files.

The following changes are made for alignment with the ISO POSIX-1:1996 standard:

  • The DESCRIPTION is updated to describe the cases when MAP_PRIVATE and MAP_FIXED need not be supported.

The following changes are made for alignment with IEEE Std 1003.1j-2000:

  • Semantics for typed memory objects are added to the DESCRIPTION.

  • New [ENOMEM] and [ENXIO] errors are added to the ERRORS section.

  • The posix_typed_mem_open() function is added to the SEE ALSO section.

The DESCRIPTION is updated to avoid use of the term "must" for application requirements.

IEEE Std 1003.1-2001/Cor 1-2002, item XSH/TC1/D6/34 is applied, changing the margin code in the SYNOPSIS from MF|SHM to MC3 (notation for MF|SHM|TYM).

IEEE Std 1003.1-2001/Cor 2-2004, item XSH/TC2/D6/60 is applied, updating the DESCRIPTION and ERRORS sections to add the [EINVAL] error when len is zero.

End of informative text.

UNIX ® is a registered Trademark of The Open Group.
POSIX ® is a registered Trademark of The IEEE.
[ Main Index | XBD | XCU | XSH | XRAT ]



The Open Group Base Specifications Issue 6
IEEE Std 1003.1, 2004 Edition
Copyright © 2001-2004 The IEEE and The Open Group, All Rights reserved.


bcopy - memory operations (LEGACY)


[XSI] [Option Start] #include <strings.h>

void bcopy(const void *s1, void *s2, size_t n); [Option End]


The bcopy() function shall copy n bytes from the area pointed to by s1 to the area pointed to by s2.

The bytes are copied correctly even if the area pointed to by s1 overlaps the area pointed to by s2.


The bcopy() function shall not return a value.


No errors are defined.

The following sections are informative.




The memmove() function is preferred over this function.

The following are approximately equivalent (note the order of the arguments):

bcopy(s1,s2,n) ~= memmove(s2,s1,n)

For maximum portability, it is recommended to replace the function call to bcopy() as follows:

#define bcopy(b1,b2,len) (memmove((b2), (b1), (len)), (void) 0)




This function may be withdrawn in a future version.


memmove(), the Base Definitions volume of IEEE Std 1003.1-2001 , <strings.h>


First released in Issue 4, Version 2.

Issue 5

Moved from X/OPEN UNIX extension to BASE.

Issue 6

This function is marked LEGACY.

End of informative text.

UNIX ® is a registered Trademark of The Open Group.
POSIX ® is a registered Trademark of The IEEE.
[ Main Index | XBD | XCU | XSH | XRAT ]

POW - power function.


(ANSI Standard)


#include <math.h>

double x, y, z;
z = pow( x, y );


"pow" returns "x" to the power "y". If "x" and "y" are both zero, or if "x" is non-positive and "y" is not an integer, "pow" return -HUGE_VAL and sets "errno" to EDOM. If the answer would cause an overflow, "pow" returns +HUGE_VAL.

See Also:

expl c lib errno

Copyright © 1996, Thinkage Ltd.

UNIX Manual Page: man 3 getopt_long


GETOPT(3)           Linux Programmer's Manual           GETOPT(3)

getopt - Parse command line options

#include <unistd.h>

int getopt(int argc, char * const argv[],
const char *optstring);

extern char *optarg;
extern int optind, opterr, optopt;

#include <getopt.h>

int getopt_long(int argc, char * const argv[],
const char *optstring,
const struct option *longopts, int *longindex);

int getopt_long_only(int argc, char * const argv[],
const char *optstring,
const struct option *longopts, int *longindex);

The getopt() function parses the command line arguments.
Its arguments argc and argv are the argument count and
array as passed to the main() function on program invoca-
tion. An element of argv that starts with `-' (and is not
exactly "-" or "--") is an option element. The characters
of this element (aside from the initial `-') are option
characters. If getopt() is called repeatedly, it returns
successively each of the option characters from each of
the option elements.

If getopt() finds another option character, it returns
that character, updating the external variable optind and
a static variable nextchar so that the next call to
getopt() can resume the scan with the following option
character or argv-element.

If there are no more option characters, getopt() returns
EOF. Then optind is the index in argv of the first argv-
element that is not an option.

optstring is a string containing the legitimate option
characters. If such a character is followed by a colon,
the option requires an argument, so getopt places a
pointer to the following text in the same argv-element, or
the text of the following argv-element, in optarg. Two
colons mean an option takes an optional arg; if there is
text in the current argv-element, it is returned in
optarg, otherwise optarg is set to zero.

By default, getopt() permutes the contents of argv as it
scans, so that eventually all the non-options are at the

GNU Aug 30, 1995 1


end. Two other modes are also implemented. If the first
character of optstring is `+' or the environment variable
POSIXLY_CORRECT is set, then option processing stops as
soon as a non-option argument is encountered. If the
first character of optstring is `-', then each non-option
argv-element is handled as if it were the argument of an
option with character code 1. (This is used by programs
that were written to expect options and other argv-ele-
ments in any order and that care about the ordering of the
two.) The special argument `--' forces an end of option-
scanning regardless of the scanning mode.

If getopt() does not recognize an option character, it
prints an error message to stderr, stores the character in
optopt, and returns `?'. The calling program may prevent
the error message by setting opterr to 0.

The getopt_long() function works like getopt() except that
it also accepts long options, started out by two dashes.
Long option names may be abbreviated if the abbreviation
is unique or is an exact match for some defined option. A
long option may take a parameter, of the form --arg=param
or --arg param.

longopts is a pointer to the first element of an array of
struct option declared in <getopt.h> as

struct option {
    const char *name;
    int has_arg;
    int *flag;
    int val;
};
The meanings of the different fields are:

name is the name of the long option.

has_arg is: no_argument (or 0) if the option does not take
an argument, required_argument (or 1) if the option
requires an argument, or optional_argument (or 2)
if the option takes an optional argument.

flag specifies how results are returned for a long
option. If flag is NULL, then getopt_long()
returns val. (For example, the calling program may
set val to the equivalent short option character.)
Otherwise, getopt_long() returns 0, and flag points
to a variable which is set to val if the option is
found, but left unchanged if the option is not found.

val is the value to return, or to load into the
variable pointed to by flag.

The last element of the array has to be filled with zeros.

If longindex is not NULL, it points to a variable which is
set to the index of the long option relative to longopts.

getopt_long_only() is like getopt_long(), but `-' as well
as `--' can indicate a long option. If an option that
starts with `-' (not `--') doesn't match a long option,
but does match a short option, it is parsed as a short
option instead.

The getopt() function returns the option character if the
option was found successfully, `:' if there was a missing
parameter for one of the options, `?' for an unknown
option character, or EOF for the end of the option list.

getopt_long() and getopt_long_only() also return the
option character when a short option is recognized. For a
long option, they return val if flag is NULL, and 0 other-
wise. Error and EOF returns are the same as for getopt(),
plus `?' for an ambiguous match or an extraneous parameter.

POSIXLY_CORRECT
    If this is set, then option processing stops as
    soon as a non-option argument is encountered.

The following example program, from the source code,
illustrates the use of getopt_long() with most of its features.

#include <stdio.h>
#include <stdlib.h>
#include <getopt.h>

int
main (int argc, char **argv)
{
    int c;
    int digit_optind = 0;

    while (1) {
        int this_option_optind = optind ? optind : 1;
        int option_index = 0;
        static struct option long_options[] = {
            {"add",     1, 0,  0},
            {"append",  0, 0,  0},
            {"delete",  1, 0,  0},
            {"verbose", 0, 0,  0},
            {"create",  1, 0, 'c'},
            {"file",    1, 0,  0},
            {0, 0, 0, 0}
        };

        c = getopt_long (argc, argv, "abc:d:012",
                         long_options, &option_index);
        if (c == -1)
            break;

        switch (c) {
        case 0:
            printf ("option %s", long_options[option_index].name);
            if (optarg)
                printf (" with arg %s", optarg);
            printf ("\n");
            break;

        case '0':
        case '1':
        case '2':
            if (digit_optind != 0 && digit_optind != this_option_optind)
                printf ("digits occur in two different argv-elements.\n");
            digit_optind = this_option_optind;
            printf ("option %c\n", c);
            break;

        case 'a':
            printf ("option a\n");
            break;

        case 'b':
            printf ("option b\n");
            break;

        case 'c':
            printf ("option c with value `%s'\n", optarg);
            break;

        case 'd':
            printf ("option d with value `%s'\n", optarg);
            break;

        case '?':
            break;

        default:
            printf ("?? getopt returned character code 0%o ??\n", c);
        }
    }

    if (optind < argc) {
        printf ("non-option ARGV-elements: ");
        while (optind < argc)
            printf ("%s ", argv[optind++]);
        printf ("\n");
    }

    exit (0);
}

This manpage is confusing.

Conforming to POSIX.1, provided the environment variable
POSIXLY_CORRECT is set. Otherwise, the elements of
argv aren't really const, because we permute them.
We pretend they're const in the prototype to be
compatible with other systems.
