Why not write an operating system in Io?
- Hi. I'm new to the Io community here so I hope I'm not committing a
faux pas by posting to the mailing list about a project I want to do
*with* Io rather than a question about the Io project. I think you
guys would probably get a real kick out of the idea of having an
operating system written in Io rather than running Io on a host OS -
don't you deserve to have the whole machine to yourself? - but if
you'd rather I moved this discussion elsewhere, let me know.
I've been building a homebrew computer based on a 68K chip, and I also
build a lot of homemade robots based on a variety of embedded
controller technologies. I've been looking for a language / virtual machine
combination that would allow me to write a minimalist operating system
for robot projects or embedded control systems. So I searched for a
pure, object oriented, minimalist language. I was going to write one
myself but then I found Io. It's small, pure, object oriented, and
doesn't require a huge virtual machine. Just what I need for a
virtual machine based robot/embedded controller OS.
Most people these days think of a virtual machine as something that
runs on top of the operating system, but in fact a virtual machine
makes a wonderful back-end for an operating system. You can use the
language interpreter or virtual machine to emulate any part of the
hardware which the operating system needs but which your computer of
choice does not have. You can also use the object-orientedness of the
language to protect data and modularize the OS, making it more elegant
and more robust. Finally, you can even write the operating system in
such a way that the OS could be seamlessly integrated into loaded
programs, so that your operating system routines are as easy to access
as parts of your own program.
This is not a new idea. Smalltalk (as far as I have been able to tell
from reading about it; I'm not old enough to have been there) was
intended as a comprehensive whole-computer environment, and although
none of the literature I have read actually calls it an "operating
system" per se, I think by today's standards it did a lot of what we
consider to be the job of an operating system. Case in point: the
Apple operating system's user interface was inspired by the Smalltalk
environment.
What would be different about this operating system is that it would
carry Io's minimalist philosophy into the design of the
operating system written in Io. For one thing, I have prototyped some
code in C# which allows you to write a component based system in such
a way that all messages between subsystems are sent to connection
points in the kernel before they go to the destination objects. What
that means is that the sender of the message is isolated from the
receiver. If you remove the receiver you don't get a null reference
exception because the connection point is still there (because the
kernel is always there). The messages are simply dropped on the
floor. That means that you can dynamically add and remove components
(such as a file system or window manager), or even hook in two
different components to the same place, without the rest of the
components even knowing that anything has changed. That way you could
easily strip down the operating system to be as small as you needed
it, or plug in features to run different hardware. By running every
subsystem in this sort of "sandbox", you limit operating system
failures to the affected component so they can't spread to other
subsystems; if any part crashes, the kernel can simply destroy and
reconstruct that object instead of rebooting. Microsoft found that 90% of their OS
crashes (especially blue screens) came from device driver code.
Knowing that, don't you wish that your desktop operating system ran
each device driver in a sandbox?
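To make the connection-point idea concrete, here is a minimal sketch in Python rather than C# or Io (Kernel, attach, detach, and send are all names invented for this illustration, not any real API): senders address a named connection point in the kernel, so detaching the receiver just drops messages instead of raising a null reference exception.

```python
# Sketch only -- every name here is invented for illustration.
class Kernel:
    def __init__(self):
        self.points = {}              # connection point name -> component

    def attach(self, point, component):
        self.points[point] = component

    def detach(self, point):
        self.points.pop(point, None)

    def send(self, point, message):
        receiver = self.points.get(point)
        if receiver is None:
            return None               # receiver gone: message dropped on the floor
        return receiver.handle(message)

class Logger:
    def handle(self, message):
        return "logged: " + message

kernel = Kernel()
kernel.attach("log", Logger())
assert kernel.send("log", "boot") == "logged: boot"
kernel.detach("log")                  # hot-swap: remove the component
assert kernel.send("log", "late") is None   # no crash, no null reference
```

Hooking two components into the same place would just mean storing a list of receivers at a connection point instead of a single one.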
Some people are doing some very interesting work on an operating
system called Singularity which uses C# to do some things similar to
what I just described. I don't know how they implemented it in
Singularity so I've developed my own ways to do it. Io, however,
might make a much better platform than C# for embedded computers, and
besides, it would be a helluva lot of fun.
So, why not write an operating system in Io?
- On 2/3/06, dennisf486 <dennisf486@...> wrote:
> *with* Io rather than a question about the Io project. I think you
> guys would probably get a real kick out of the idea of having an
> operating system written in Io rather than running Io on a host OS -
There is a project called ioL4, which is Io ported to the L4
microkernel. However, it is woefully out of date.
Samuel A. Falvo II
- On 03-Feb-06, at 08:22 PM, dennisf486 wrote:
Hi Dennis.
> I've been building a homebrew computer based on a 68K chip, and also I
> do a lot of homemade robots based on a variety of embedded controller
That sounds like a fun project. How much memory does it have?
> So, why not write an operating system in Io?
I'd love to see something like that. Let me know if I can help.
-- Steve
> Hi Dennis.
> That sounds like a fun project. How much memory does it have?
Right now the plan is for 256K of static RAM. I have space on my
breadboards for more if I need it. If I implement an SD card reader
(from the data sheets it doesn't look like the protocol they use is
very difficult), I can have up to 2 or 4 gigabytes of virtual memory,
and virtual memory on an SD card should be pretty darn fast.
> I'd love to see something like that. Let me know if I can help.
> -- Steve
Thanks. In order to do it, I'll probably have to start my own new
project off the existing Io codebase because I would have to make a
lot of modifications, and it might not be desirable or feasible to
merge those changes into the existing Io program. I hope you guys
don't mind. I want to keep the language as much the same as possible, but
there would probably be changes to the virtual machine that would be
specific to the case of running it by itself without a host OS and to
provide an Io runtime kernel. I have some really cool ideas I want to
try out concerning techniques for running isolated modules, wherein
failures in one part of the system would not be capable of affecting
other, isolated modules. Imagine an OS where, when something crashes,
you don't reboot your computer; the kernel just destroys and recreates
the object that crashed. To accomplish that, I have a plan for
allowing the Io code for the OS to be in modules/packages without
adding any keywords or special tags to the language. Basically the
idea is to leave the code alone and use completely separate files to
store metadata describing what packages/libraries/modules each *.io
file is related to.
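As a sketch of that sidecar-metadata idea (the manifest format and the file names below are invented here, not an existing Io convention), the package mapping could live in a separate file that the runtime parses, leaving the *.io sources untouched:

```python
# Invented sidecar format: "<file> -> <package>" lines kept apart from
# the Io sources, so no keywords or tags are added to the language.
MANIFEST = """\
kernel.io    -> core
scheduler.io -> core
sdcard.io    -> drivers
"""

def parse_manifest(text):
    mapping = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        name, package = (part.strip() for part in line.split("->"))
        mapping[name] = package
    return mapping

packages = parse_manifest(MANIFEST)
assert packages["sdcard.io"] == "drivers"
assert packages["kernel.io"] == "core"
```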
I would run each Io file (module) in its own "virtual instance" of the
virtual machine. That way the Io source code remains "pure" but you
can still build bigger programs out of small files. To make it
impossible for an error in one module to crash some other module, you
would never let the modules have references that cross boundaries from
one module's space into another. Instead, you would only allow
modules to pass messages to each other and the runtime kernel would
magically make new copies of the message object and its sub-objects
within the contexts of each receiving module. The receiver then can
decide whether to retain this data by putting it in a slot, or it can
ignore the message and let it get garbage collected. Every module
runs in its own sandbox and is completely free to do as it pleases,
knowing that it is safe both from side effects caused by other modules
and from causing side effects to other modules. Message passing
through the kernel would be the only way for one module to talk to
another.
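A rough sketch of that boundary rule in Python (RuntimeKernel and Module are invented names; Python's deepcopy stands in for the kernel copying a message and its sub-objects into the receiver's context):

```python
import copy

# Sketch only: these class names are invented for illustration.
class Module:
    def __init__(self):
        self.slots = {}               # the module's private context

    def receive(self, message):
        self.slots["last"] = message  # retain the copy in a slot, or ignore
                                      # it and let it be garbage collected

class RuntimeKernel:
    def __init__(self):
        self.modules = {}

    def register(self, name, module):
        self.modules[name] = module

    def send(self, target, message):
        # the only way across a module boundary: the kernel copies the
        # message object and its sub-objects into the receiver's context
        self.modules[target].receive(copy.deepcopy(message))

kernel = RuntimeKernel()
a, b = Module(), Module()
kernel.register("a", a)
kernel.register("b", b)

msg = {"cmd": "set", "payload": [1, 2, 3]}
kernel.send("b", msg)
msg["payload"].append(4)                        # sender mutates afterwards
assert b.slots["last"]["payload"] == [1, 2, 3]  # receiver's copy unaffected
```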
As a bonus, with guaranteed isolated memory contexts, you could run
garbage collection on one module at a time without ever blocking the
whole system. And the collection would finish faster because you
wouldn't have to look outside the context.
Making all these gratuitous copies to ensure isolation must seem
horribly wasteful (especially on a machine with only 256K of RAM), but
copy-on-write would make it practical to do this. I wonder how hard
it would be to make the entire Io object implementation copy-on-write?
That way, if you made 100 copies of an object, nominally you have 100
objects in different contexts, but you would really only have 100
references to the same object. It would save memory, and it would
also make the clone method fast as hell. The assumption is that most
of the time you will not actually make changes to more than a few of
them. When you do write to an object, then you end up with 99
references to the original and 1 reference to a modified copy. If you wanted to
get really fancy, you could extend the garbage collector to not only
collect unused objects, but to compute hashes of objects and, when the
hashes match, check to see if the two objects can be merged into one
and update the references. Also, perhaps, if you can track who has a
reference to what, even when you do the copy-on-write, even that copy
could be a shallow copy so that you only duplicate the written object
but objects referenced by the written object are not copied.
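The hash-and-merge pass might look something like this sketch (merge_duplicates is an invented name; a Python dict stands in for the hash-then-compare step a real collector would do):

```python
# Sketch of the merge idea: collapse structurally equal objects into a
# single shared reference. A dict lookup hashes the object and then
# compares for equality -- the same two steps described above.
def merge_duplicates(refs):
    interned = {}
    return [interned.setdefault(obj, obj) for obj in refs]

a = tuple([1, 2, 3])     # equal contents,
b = tuple([1, 2, 3])     # distinct objects
assert a is not b
merged = merge_duplicates([a, b])
assert merged[0] is merged[1]   # both references now share one object
```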
I think all this could be implemented by having a context-local array
of object descriptor data. This object descriptor data would then in
turn point to the actual object in memory, and the descriptor data
would also have a bit indicating whether the object is owned by the
context or shared with other contexts. When a write occurs, this bit
would be checked and, if it is a shared object, a context-local copy
would be lifted from the shared object and then execution would
continue using the local copy instead.
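Here is one way that descriptor scheme could look, sketched in Python (Descriptor and Context are invented names; a dict stands in for an object's slot storage):

```python
# Sketch of the context-local descriptor array with an owned/shared bit.
class Descriptor:
    def __init__(self, obj, shared):
        self.obj = obj          # points to the actual object in memory
        self.shared = shared    # True if shared with other contexts

class Context:
    def __init__(self):
        self.table = []         # context-local array of descriptors

    def adopt_shared(self, obj):
        self.table.append(Descriptor(obj, shared=True))
        return len(self.table) - 1          # descriptor index

    def read(self, index):
        return self.table[index].obj

    def write(self, index, slot, value):
        d = self.table[index]
        if d.shared:
            # lift a context-local copy before the first write; note this
            # is a shallow copy, so objects it references stay shared
            d.obj = dict(d.obj)
            d.shared = False
        d.obj[slot] = value

shared_obj = {"x": 1}
c1, c2 = Context(), Context()
i1 = c1.adopt_shared(shared_obj)
i2 = c2.adopt_shared(shared_obj)
c1.write(i1, "x", 99)
assert c1.read(i1)["x"] == 99
assert c2.read(i2)["x"] == 1    # the other context still sees the original
assert shared_obj["x"] == 1
```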
Things get more complicated when you factor in the possibility of a
physically shared object which then wishes to access what it thinks is
another physically shared object, but in fact the object has
previously been copied and a local copy modified. For this situation,
both reads and writes would have to be examined to see if the object
being accessed has been moved to the local context. Whenever a
context-local copy exists, the local copy must be used instead.
Also, having physically shared objects makes it harder to benefit from
context-local garbage collections, and in any case would significantly
complicate the garbage collection code. On the other hand, maybe
there is a neat way to implement it that wouldn't suffer too much from
these drawbacks.
I have a feeling that an efficient implementation of copy-on-write
would somehow actually require 1 more level of indirection than what I
have specified, but I'll have to sleep on it. In principle, I can't
see any reason why the whole thing couldn't be copy on write, but
maybe there's an implementation problem I'm not seeing. It is true
that this makes execution slower, but I've already traded memory
efficiency for module isolation; now I'm just talking about trading
some speed to recoup some of that memory efficiency.
- On 2/3/06, dennisf486 <dennisf486@...> wrote:
> I've been building a homebrew computer based on a 68K chip, and also I
I'm building a home computer now around the 65816 actually. A
proof-of-concept SBC machine called the Kestrel 1p3 is on my site now
( http://www.falvotech.com/projects/kestrel/1p3.php ). I'm working on
the Kestrel 2 design, which is going to have 128KB of static RAM to
start with, upgradable to 15MB if you need to. :)
> Most people these days think of a virtual machine as something that
> runs on top of the operating system, but in fact a virtual machine
> makes a wonderful back-end for an operating system. You can use the
Texas Instruments did this with their GROM system. It was pretty
clunky though -- highly proprietary, and it really slowed the system
down hardcore (their VM was used to implement its BASIC interpreter,
so you have a VM interpreter running the BASIC interpreter -- ugh!).
You will want to be very careful in how you design the VM, because it
can and will impact performance significantly.
> such a way that the OS could be seamlessly integrated into loaded
> programs, so that your operating system routines are as easy to access
> as parts of your own program.
Shades of exokernel here, where application OSes are actually
user-level libraries that are either statically or dynamically linked.
> limited to the component and can't spread to other subsystems, so if
> any part crashes the kernel can simply destroy and reconstruct that
> object instead of rebooting. Microsoft found that 90% of their OS
Actually, this process is called 'micro-rebooting,' because you can
(in the event of a component failure) just destroy the old instance of
the component, and restart it from scratch. This is known as
"Crash-only Software" -- google it -- it's pretty interesting
research. Too bad the research seems to have ceased.
Even in a Linux environment, there are times when it's just plain
faster to reboot a box repeatedly than try to find the cause of a
quirky bug. Anyone who has run the Linux OS in a production and
heavily accessed environment knows that there are times when the
kernel will just wig out for no apparent reason. BSD never, EVER
seems to do this. I found a Usenet posting once where someone had
found a race between interrupts and freeing memory pages which causes
the kernel to wig out in exactly the same ways we notice at work
today. This was back in 2000. Apparently, it wasn't a big enough bug
to solve. >:(
Samuel A. Falvo II