Weblog of: Derek Keats

How to do a Developer install of Chisimba on Ubuntu 12.04 the easy way
819 days ago

The following instructions place your developer code in your home directory, in a directory called 'chisimba', and do the installation into /var/www/ch. This is a good standard setup that allows you to run other scripts that make your life as a developer easier. The instructions assume you have not installed Apache, or have not broken it in some way, but they should work even if Apache is already installed.

1. Make a chisimba directory in your home directory

cd ~
mkdir chisimba
cd chisimba
 

2. Get the Chisimba installer

wget http://kengasolutions.com/downloads/install-chisimba-dev

3. Install chisimba

sudo bash install-chisimba-dev

Note that because the UWC server does not have a valid SSL certificate, the checkout requires manual intervention. This may cause the install to fail at the checkout. If that happens, delete everything in the script before the svn checkout, delete the /var/www/ch directory, and run the rest of the script again while watching for the required input. Select "p" to permanently accept the certificate.
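If you would rather avoid the interactive prompt altogether, Subversion 1.6 and later can be told to accept the server certificate non-interactively. This is just a possible alternative, not part of the installer script, and the repository URL below is only a placeholder for whatever the script actually checks out:

svn checkout --non-interactive --trust-server-cert https://your.svn.server/path/to/chisimba ~/chisimba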

4. Follow the instructions in the browser when it opens.

That's it! Install the Module Builder module (makemodule), and create your first working module in under a minute. If you want to use skins from the canvases directory, type

cd /var/www/ch/skins
ln -s ~/chisimba/canvases/* .

OK, off you go then. Set it up.
 



Complete set of my photos from Gota Abu Ramada, Red Sea, Egypt
834 days ago

These photos are from my two week diving trip to the Egyptian Red Sea in August / September 2011. We left from Port Galib, went south and then back and across to Daedalus and The Brothers, and finally back to Hurghada. This was posted in September last year, and has been restored on a new server. I will be restoring more of these as and when I get the time.

 

Gota Abu Ramada is a medium-sized, oval-shaped reef with a shallow, flat, sandy seabed surrounding it. It is one of the most popular sites for day boats from Hurghada, and for safari boats returning to port. We stopped there early in the morning, before the roughly 10 000 dives a day by other divers had started. It is a very pretty site, and one of the best dives of the whole trip, as the photos may help attest. Enjoy!



Emerging & Future Trends in ICT
834 days ago

Presented at the SAFIPA conference last year as the opening keynote. It was posted on dkeats.com at the time, but was deleted in the move. I am just restoring it here.

 



A tweet reminds me that we MUST teach Computer Science at school again in South Africa
841 days ago

Today, the day that my Christian friends and family call Easter Sunday, I saw a tweet by Angelica Rocha (@angie4edtech)

Nobody bothers to ask the question Papert 1st posed 45 years ago: Does the child program the PC or does the PC program the child? Tragedy!

which was retweeted by Audrey Watters.

This got me thinking about computer science in schools, which I suppose was its intent.

But first I had to dissect the question. Lately, I have been pondering a lot about the nature of questions that people ask, and the implicit assumptions as well as the emotions they contain. If you have read some of my blog posts answering specific questions about FOSS, you will see some of my thoughts on how questions can be used to create emotions or responses that have nothing to do with the answer.

In the 1960s, the mathematician and computer scientist Seymour Papert postulated that a computer could allow learners to shape the way in which a computer was used, and to construct knowledge in ways and domains that would be impossible without it. Papert suggested that, using a computer, learning math could be made natural and effective, like learning Zulu by living in KwaZulu-Natal as opposed to being taught Zulu in a classroom in an English-speaking region. Papert posed the question "Does the child program the computer or does the computer program the child?" (there were no PCs then).

On one level the question is meaningless, and on another its answer is axiomatic, but in between lies the true meaning of the question. Where does the power in the child-computer relationship lie? More importantly, it gives rise to the question "What are we doing today to shift the balance of power more in the direction of the child?"

For the first two instances of the question, the use of the "or" operator is entirely inappropriate. Of course, we need an operational definition of 'programming', since the word is inappropriately applied to the child: the technology does not deliberately construct algorithms and load them into the child for execution. On this level it is a meaningless question, and in this sense the first clause is true and the second one false (C→PC ∧ ¬(PC→C)). But that is not what Papert was on about.

In the second instance of the question, we can adopt an operational definition that defines 'programming' as creating a sequence of instructions in the case of the child programming the computer, and altering the behaviour of the child in the case of the computer programming the child. All interactions with technology are - well - interactions, and the influence runs in both directions. There is no single answer to this instance of the question, as the degree to which the relationship is skewed depends on context. A child doing some deep programming task, such as writing a module for the Linux kernel, has a different kind of relationship with the computer than a child playing Angry Birds.

We first need to acknowledge that all technology works like this. If it didn't, we would still be living in caves and foraging for roots and fruits. It is fundamental to our nature: we give up some power to technology and its creators for the real or perceived benefit it brings. We create the technology, and then the technology alters our behaviour. Which way the influence runs depends on the nature of the technology and our relationship with it.

But neither of these meanings was what Papert had in mind. He was asking, in an obscure and metaphorical way, where the centre of power lies in the relationship between the child and the computer. His work led to the creation of the Logo programming language, and the kinds of things he had in mind were related to a child ACTUALLY programming a computer, something that is a rare phenomenon these days. And this is the crux of the challenge we face in building a so-called knowledge economy.

There was a time in the brief history of computing when computing was taught at school, but these days it seems unlikely to happen very much. I have four children: two have finished school, and two are still in high school. They all took subjects that have the word computer in them (or some watered-down term for computers), but not one of them has the faintest rudiments of knowledge about programming. Despite it being something I do almost every day for the pure pleasure of it, none of them, and none of their friends, has any clue about computer programming. Not even web scripting, or an entry-level language like Logo. Not even how this text is bolded in HTML.

So, at this time in the 21st century, using the third interpretation of the question, the answer is clearly that the power lies with the computer as a manifestation of the ideas and activities of other people who live on the other side of the asymmetric relationship. These are the people who conceived the programmes, wrote the games, designed the applications, or created the web sites and applications that are being used. There is not ONLY a relationship BETWEEN computer and child; there is a relationship AMONG computer, child, and a whole lot of other people. It is a highly skewed relationship these days for most children.

This relationship has been weakened and skewed by a combination of the technology itself (computers are easier to use because people have already programmed them) and educational policies around the world that have shifted computers from the centre to the periphery of learning. We no longer learn ABOUT computers, we learn WITH them (if we are lucky). The theorisation of computers in learning, and the creation of a pseudoscience around it, has created the illusion that we still do deep stuff with computers. We mostly don't.

Of course, the phenomenon of 'backgrounding' happens with all technology. Perhaps it is a tragedy, perhaps it is natural and human. I find it unfortunate that it has happened so early with computers, and in South Africa we seem intent on propagating this misfortune in our education systems. I find it sad that we are losing opportunities for more general and widespread capabilities to tell 'the computer' what to do. In South Africa, our school-level education system seems to be increasingly based on a 'for dummies' approach.

We talk about maths and science education, yet hardly anyone is saying that to have effective maths and science education we need computer science. Teaching maths and science without computer science is like trying to teach literacy without ANY means to produce written words. We have a lot to say about the so-called knowledge economy, without understanding that the knowledge economy rests entirely on the work of people who are able to exert force and do work deep down in the ever-smaller bowels of the computers that are ubiquitous in our lives. If we don't delve that deeply into computers, all South Africa will ever be is a peripheral player and consumer in the knowledge economy. We will be colonised and exploited AT LEAST as badly as we were under the exploitative Apartheid system.

Eben Moglen said "software is life". A society that does not have a widespread understanding of computer science in all its forms will not understand the implication of this: the formation of a society that is not programming the computer, but is instead being programmed by the computer (acting as a mediator). And a programmed society is not free.

The solution? Talk about computers, not watered down phrases. Teach programming, not web browsing and Facebook posting. Take back control. Shift the balance of power in our favour. And of course, Free and Open Source Software can help make that happen. 


Since I wrote this, the 'Related Tweets' in the sidebar have turned up some interesting links that I didn't know about. That's the bee's knees!



Making a module in #Chisimba using module builder
844 days ago

I first posted this just before I moved my site and decided to be selective about what I restored. Module builder is a very powerful means for developers to get started writing code in Chisimba, so I am restoring it here. Personally, I never write a module from scratch; I always use module builder to generate a working module first. Of course, I may uninstall and reinstall it many times before I have it doing close to what I want, but the fact that I can write my first line of code into a working module greatly speeds up development. I wrote the backup module for Chisimba in just over two hours this way.

I have just spent some time cleaning up code that was written by a group of developers, and it is absolutely incomprehensible how they could have gotten things so badly wrong. Chisimba provides so many tools to help developers create good code, but having tools available means nothing if they are not used. One of these tools, which could have made such a difference had it been used, is the module builder.

Module builder is a developer module within the Chisimba application framework that allows a developer to create a basic installable module in a few seconds. It allows for the creation of modules that use JSON templates, or that use a dynamic interface built using predefined blocks. It is the best way to start a new module because it creates code that is an example of best practice, and that represents all of the Chisimba coding standards. Developers then add code to the module by following the same standards. The module is fully installable when it is created, after about 20 seconds of really hard work!

If you are developing Chisimba modules, there are other tools to help you. For example, clone-chisimba is a bash script that clones your code to create a new installation by building symlinks to one common code base. That way, you don't have to maintain multiple copies of your developer code. Another is install-chisimba, which fully sets up a developer machine from scratch, provided you are running Ubuntu. You can tweak it for other distros.
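To make the idea concrete, the heart of such a symlink-based clone is very simple. This is only an illustrative sketch, not the actual clone-chisimba script; the paths, and the choice of config and usrfiles as per-site directories, are assumptions:

#!/bin/bash
# Illustrative sketch only -- not the real clone-chisimba script.
SRC=~/chisimba          # the single developer checkout
DEST=/var/www/newsite   # the new installation to create

sudo mkdir -p "$DEST"

# Symlink every top-level item of the checkout into the new site,
# so all installations share one code base.
for item in "$SRC"/*; do
    sudo ln -s "$item" "$DEST/"
done

# Give the clone its own writable config and usrfiles directories
# instead of sharing them with the original checkout.
for dir in config usrfiles; do
    sudo rm -f "$DEST/$dir"
    sudo mkdir -p "$DEST/$dir"
done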

I totally don't get people who do things manually when there are scripts and other automation tools available. Even worse is when they do things manually and create a complete and utter mess!



Enterprise #Chisimba: how to install APC for opcode caching to improve performance
844 days ago

Your Chisimba production or enterprise site will be much faster and more reliable if you install an opcode cache or accelerator for PHP. Most PHP accelerators work by caching the compiled bytecode of PHP scripts to avoid the overhead of parsing and compiling source code on each request (some or even most of which may never be executed). To further improve performance, the cached code is stored in shared memory and directly executed from there, minimizing the amount of slow disk reads and memory copying at runtime.

We recommend APC (Alternative PHP Cache), a free, open source framework that optimizes PHP intermediate code and caches data and compiled code from the PHP bytecode compiler in shared memory. See http://en.wikipedia.org/wiki/List_of_PHP_accelerators for alternatives, including a proprietary commercial one. See http://www.php.net/manual/en/intro.apc.php for more information about APC.

To install APC, you can use PECL, which should already be installed on your server if you have installed a typical LAMP stack, followed the Chisimba installation instructions, or run one of our installer scripts. PECL is a repository of PHP extensions that are made available via the PEAR packaging system. You need build-essential on your system, as well as the Perl-compatible regular expression library (PCRE). If you have not installed them, do so first with:

$ sudo apt-get install build-essential libpcre3 libpcre3-dev

The command

$ sudo pecl install extname

downloads the extension automatically, so in this case there is no need for a separate download. You can therefore install APC by running:

$ sudo pecl channel-update pecl.php.net
$ sudo pecl install apc

You will be asked:

Enable internal debugging in APC [no] :
Enable per request file info about files used from the APC cache [no] :
Enable spin locks (EXPERIMENTAL) [no] :
Enable memory protection (EXPERIMENTAL) [no] :
Enable pthread mutexes (default) [yes] :
Enable pthread read/write locks (EXPERIMENTAL) [no] :

You can select the default for all of them. You will then get a lot of things written to the terminal window, finishing with:

    Build process completed successfully
    Installing '/usr/include/php5/ext/apc/apc_serializer.h'
    Installing '/usr/lib/php5/20090626/apc.so'
      install ok: channel://pecl.php.net/APC-3.1.9
      configuration option "php_ini" is not set to php.ini location
    You should add "extension=apc.so" to php.ini

The important thing here is that if it finishes with errors, you need to do some debugging; perhaps you are missing some of the critical components. An important note in that output is "You should add "extension=apc.so" to php.ini". To do so, open the file in an editor:

$ vi /etc/php5/apache2/php.ini

In the future, if new versions of APC are released, you can easily upgrade them using

$ sudo pecl upgrade apc

There are two primary decisions to be made when configuring APC. First, how much memory is going to be allocated to APC; and second, whether APC will check if a file has been modified on every request. The two ini directives that control these settings are apc.shm_size and apc.stat, respectively. apc.shm_size defaults to "32M" and is changeable in PHP_INI_SYSTEM. apc.stat defaults to "1" and is also changeable in PHP_INI_SYSTEM. I would leave them at their default values and monitor your installation.
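For reference, the relevant lines in php.ini (or in a separate apc.ini, depending on how your distribution organises extensions) would look something like this; the values shown are just the defaults discussed above:

extension=apc.so
apc.shm_size=32M
apc.stat=1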

However, James Scoble, who manages the Chisimba installations at UWC (which runs http://www.uwc.ac.za as well as their eLearning server), says:

"we use APC with 300Mb shared RAM. Without APC enabled on the PHP level, the Portal server was running out of RAM and starting to disk-thrash."

-- James Scoble, UWC

UWC also runs multiple application servers, with load balancing, due to the large volumes of traffic. On a system with 8 GB of RAM, it should be OK to set this to 256 MB or so, but tuning a server's settings is really about watching how it behaves and responding accordingly.

Once the server is running, the apc.php script that is bundled with the extension should be copied somewhere into the docroot and viewed with a browser as it provides a detailed analysis of the internal workings of APC.

$ cp /usr/share/php/apc.php /var/www/

Then open http://yourserver.com/apc.php. If you get

No cache info available. APC does not appear to be running

then APC is not activated, and you need to take the recommended action. Edit your php.ini file as noted above, find the section that says

    ;;;;;;;;;;;;;;;;;;;;;;
    ; Dynamic Extensions ;
    ;;;;;;;;;;;;;;;;;;;;;;

and add

    extension=apc.so

immediately after that text, write the file and restart apache using

$ service apache2 restart

Open http://yourserver.com/apc.php again and you should get some information about APC on your system, as well as some graphs. Be sure to move this file outside your web root, putting it back only when you need to use it.
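For example, something like the following keeps the script available without leaving it publicly accessible (the destination directory is arbitrary):

$ sudo mv /var/www/apc.php /root/
$ sudo cp /root/apc.php /var/www/    # put it back temporarily when you need it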
 



Should be #GNU/Linux (#Linux kernel is OK) - but a good video - How Linux is made.
844 days ago

Although this video misses the tens of thousands of independent developers, testers and users who also contribute to Linux, and focuses more on the Linux kernel than on the whole GNU/Linux operating system, it is still a good watch.

It would be nice to see something like this covering packages. Without packages, the kernel has no work to do.




Introducing #Chisimba eLearning: how to use filters
847 days ago

Filters are a powerful means to include remote and local content in any Chisimba content. This short video demonstrates using filters in Chisimba content.



Manually managing short URLs in #Chisimba
849 days ago

First you need to install the shorturl module. Copy the _htaccess file in the short URL module to the root of your Chisimba site as .htaccess (note the dot replacing the underscore). Enable mod_rewrite in Apache by typing 'sudo a2enmod rewrite' into a terminal. Then you need to tell Apache to allow overriding paths by editing /etc/apache2/sites-enabled/000-default (or the file for your particular vhost) and adding 'AllowOverride All' to the <Directory> section of the file.
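For reference, the <Directory> section of a default Apache 2.2 vhost on Ubuntu looks roughly like this once AllowOverride has been changed; your paths and other directives may differ:

<Directory /var/www/>
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>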

After you restart Apache using 'service apache2 restart', you should be good to go. Short URL appears on the admin menu of your Chisimba site. This is a manual system for managing short URLs: you can type any URL into the editor and create a short version of it.

Here are a couple on this site:

There are other ways to do short URLs in Chisimba, but this is the one I prefer. There is also a proposal for a more permanent system of short URLs at http://dkeats.com/staticurl - but this has not yet been implemented.



How to move a #Chisimba instance between different servers
849 days ago

It is not uncommon to move installed web applications between different physical or virtual servers. This is really a basic systems administration task, and is not particularly special to Chisimba. Any systems administrator should know how to do this for any web application.

In this example, we will be moving an installation from one server to another, but the new server will still be running the same domain. For example, I moved this site from one cloud host to another, and then later moved it again. Before you begin, make sure that you have all the Chisimba files in the correct location on the new server. You can run the install-production-rackspace script, but not bother doing the actual Chisimba install in your browser. This is the procedure I used.

STEP 1. Back up the files you need to copy over.

cd /path/to/your/site (e.g. cd /var/sites/dkcom)

tar -zcf config.tar.gz config
tar -zcf usrfiles.tar.gz usrfiles
tar -zcf user_images.tar.gz user_images

STEP 2. Copy the files over to the new server using scp (secure copy)

scp config.tar.gz root@yourdomain.com:/path/to/your/site/on/the/new/host

For example, this may be:

scp config.tar.gz root@yourdomain.com:/var/sites/dkcom/
scp usrfiles.tar.gz root@yourdomain.com:/var/sites/dkcom/
scp user_images.tar.gz root@yourdomain.com:/var/sites/dkcom/

If you have a .htaccess file, you can also scp that over, for example if you run a URL shortener.

scp .htaccess root@yourdomain.com:/var/sites/dkcom/

STEP 3. SSH into the new server and unzip them

cd /var/sites/dkcom
touch tmpinstallfile
rm installer -R

tar -zxf config.tar.gz config
tar -zxf usrfiles.tar.gz usrfiles
tar -zxf user_images.tar.gz user_images

STEP 4. Back up and restore the database

Assuming you have phpMyAdmin installed on your original and destination servers, EXPORT your database as an SQL file and save it to your local computer. Then open phpMyAdmin on the destination server and import it. Make sure that your php.ini settings are adequate to allow a fairly large file upload, or the import will fail.
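If you prefer the command line to phpMyAdmin, mysqldump and mysql do the same job. The database name and user below are placeholders; substitute your own:

# On the original server
mysqldump -u root -p chisimba_db > chisimba_db.sql
scp chisimba_db.sql root@yourdomain.com:/root/

# On the new server (create the empty database first if it does not exist)
mysql -u root -p chisimba_db < chisimba_db.sql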

That's it. If the DNS has propagated, open your site via its original URL on the new server and it should all be there. A word of caution though: when I did this for dkeats.com, it looked as though the IP address change had propagated, but I was working away on the old site! It is probably a good idea to replace index.php on the old site with something that will let you know that you are on the wrong site, just in case.
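Something as crude as the following, run on the old server, is enough to make it obvious when you are looking at the wrong machine; the path is the example one used above:

cd /var/sites/dkcom
mv index.php index.php.old
echo "This is the OLD server - the DNS change has not propagated yet." > index.php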

Another note: if you are moving between domains, you can follow the same procedure, but you will need to use sed to change all occurrences of originaldomain.tld in the SQL file to newdomain.tld, as in the sketch below. Google 'sed command line editor' for more on how to do that.
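A minimal sketch of that sed step, assuming the SQL dump from Step 4 is called chisimba_db.sql:

# Keep a backup, then replace every occurrence of the old domain
cp chisimba_db.sql chisimba_db.sql.bak
sed -i 's/originaldomain\.tld/newdomain.tld/g' chisimba_db.sql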



FAQ: How can you say FOSS fosters innovation? Is there a test for 'most innovative'?
850 days ago

This is another of a series of posts about Free and Open Source Software (FOSS) that arise from questions that I have been asked in various FOSS initiatives in which I have been involved. Some of these were initially posted at the KIM Blog at Wits University, and I thought it useful to redo them here so they are available more widely. I am restoring them from my original blog because I think they are still relevant.

I was recently queried about FOSS and innovation.  The question was:

How can you say FOSS fosters innovation? Is there a test for 'most innovative'?
The first part of the question is valid; the second part presupposes something that I would never claim existed, so the simple answer to the second part is 'no'.

However, before we get to why the answer is 'no', let us look at these questions, which on the surface appear to be related even though they are independent. This type of question belongs to a class of fallacies known as 'fallacies of distraction', and it is an instance of the 'complex question' fallacy. This type of question is seldom asked out of ignorance; it is almost always used by someone with malicious intent. There are two parts to the question, with the implicit idea being that if the answer to the second question is 'no', then the conclusion should be that FOSS does not foster innovation. FOSS activists should be aware of this type of fallacy, as it is a common ploy that is often used along with the red herring and the straw man fallacies when the person using it wishes to discredit FOSS before a naive audience.

Innovation is subject to survivorship bias (you don't see the graveyard of failed innovations, only the successful ones that survive and still exist), and 'luck' plays a strong role. Furthermore, innovation is likely to be scale dependent, with an inverse probability of success as one goes up in scale. In addition, innovation is not static, and it is subject to the fact that the past is not a good predictor of the future (something that is axiomatic, since something predictable is unlikely to be innovation - see 'The Black Swan' by Nassim Nicholas Taleb). For these reasons, but especially the survivorship bias, it is almost impossible to make meaningful studies of innovation or draw meaningful conclusions.

To my knowledge nobody has ever claimed anything about FOSS being 'most innovative'. Indeed, innovative is perhaps not so much a characteristic of software, but rather of people and perhaps to some extent organisations. There are, however, certain barriers to innovation that FOSS reduces or eliminates, and this allows people to innovate. There are other barriers that affect innovation, such as access to capital for example, that are not impacted directly by FOSS. Most of these barriers are self-evident and obvious, rather like the statement that an open door is easier to pass through than a closed door. Some of the barriers inherent in proprietary software that FOSS lowers or eliminates include:

  • Availability of starting points
  • Access to knowledge
  • Software Permissions
  • Software and other initial and ongoing costs

 

One of the great things about working in a FOSS ecosystem is that one seldom has to start from scratch. There is a wide variety of software available in different layers, from the operating system, programming languages and tools, web server, and database, to components and full applications. These can be used without any requirement to do anything in particular, including asking for permission, because those permissions are explicitly given in the license. For example, when we started Chisimba as part of the African Virtual Open Initiatives and Resources (AVOIR) project, we built it on the GNU/Linux operating system, the Apache web server, the PHP programming language, as well as a large number of other building blocks that went into it.

Access to knowledge is a key component of how FOSS lowers the barriers to innovation. When you have an idea, limited coding experience, and few resources, how do you learn to code it? Excellent software is available to study, and if in studying it you use it for something, that is OK, because the freedoms of Free Software mean you are free to study, adapt, modify and distribute it. Free Software is a learning resource not just because its source is available, but because often the source has been designed or developed by some of the best programmers in the world. Aside from the source code, the community associated with most FOSS projects is itself a learning resource, especially for those willing to jump on the mailing list and ask informed questions.

Software and other initial and ongoing costs are reduced by using FOSS. There are a number of cost areas that are impacted by FOSS, including:

  • Start-up costs;
  • Scaling out costs;
  • Lock-in costs;
  • Malleability costs;
  • Abandonment costs;
  • Uncertainty of costs.

With FOSS, you can just grab the building blocks and development environment you need, and get started. Whether you have one developer or many, the cost of these building blocks is the same: nil. When you want to scale out, you do not have to worry about purchasing additional licenses; you just do it. There are no lock-in costs, and where there are alternative technologies to use, it is usually pretty simple to change. There are no proprietary lock-in mechanisms in FOSS. One of the costs that is often overlooked is what I call malleability costs. When you are starting something, you often want to try out different application stacks and ways of doing something. ... Then there is the uncertainty of what you will need if your application needs to scale, and predicting future costs for complicated, proprietary licenses is almost impossible.

Many of the most innovative and valuable companies today got started by building on this stack of existing FOSS applications, including Google, Facebook, Twitter, and others. Google was founded by Larry Page and Sergey Brin while they were students at Stanford University. They maxed out their credit cards to buy hardware, and would certainly not have been able to get the company off the ground if they had needed to worry about software license fees and the general uncertainty of proprietary license minefields. Google became a private company on September 4, 1998, and Larry and Sergey made #5 on the Forbes list in 2007 with a net worth of $18.5 billion each.
Another Internet success story is Facebook, launched by Mark Zuckerberg from his Harvard University dorm room on February 4, 2004. Facebook was built entirely on a FOSS application stack, mainly because the lack of barriers allowed Zuckerberg and his friends to just experiment. After he built a global company on a FOSS platform, Time magazine named Zuckerberg one of The World's Most Influential People of 2008, with Facebook being ascribed a market value of $15 billion in 2007. Facebook continues to contribute to FOSS, and has recently also published open hardware specifications for its data centre hardware.
Closer to home in South Africa, Mark Shuttleworth started Thawte in the 1990s, originally running it from his parents' garage in Durbanville. He built the applications on a stack of existing FOSS applications, with the secure server being an adaptation of the Apache web server. Before being sold, Thawte captured 50% of the world's digital certificate market, with VeriSign having the other half. In 1999, VeriSign acquired Thawte from Shuttleworth in a stock purchase worth US$575 million.

Lowering the barriers to innovation is the basis for Dr Sibusiso Sibisi's discussion of FOSS (as FLOSS) and national innovation systems in the NACI document. It is well worth a read. On page 7, the point is made that by expanding the scope for local innovation, an open source development environment allows local enterprises both to germinate, and to move up the international ICT knowledge value chain. This is only possible because the barriers are lower.

 

Vint Cerf, one of the co-creators of TCP/IP, which powers the Internet, gave a recorded video talk at the Digital Freedom Expo in Cape Town a few years ago, and he pointed out how the importance of keeping TCP/IP free had led to the innovations that we now collectively know as the Internet. "Keeping important things Free and Open has been vital to the development of the Internet, and is likely to be a valuable contributor to development in Africa because when core things are Free and Open there are no barriers to innovation." We can see this effect in the chain of things that led to the creation of Google, and its expansion into a global company.

 

The Austrian-American economist Joseph Schumpeter is responsible for many of the ways we think about innovation. Schumpeter postulated innovation as a critical dimension of economic change, and argued that innovation was substantially if not exclusively the purview of the firm. He argued that temporary monopolies were necessary to provide the incentive for firms to develop new products and processes. More recently, Eric von Hippel, a professor at the MIT Sloan School of Management specializing in the nature and economics of distributed and open innovation, has provided an alternative view of innovation. von Hippel developed the concept of user innovation: that end-users, rather than manufacturers, are responsible for a large amount of new innovation. In von Hippel's view, individuals, firms (where software is not their main product) and organisations can all be 'user innovators'. By virtue of the lower barriers, in the software space FOSS is a major contributor to user innovation, and this is the subject of a body of empirical research by von Hippel and his students, standing in contrast to Schumpeterian innovation.

One of the consequences is that nearly all of the innovative IT-based companies of the last 10 years have been built on FOSS. The small few that have not were created by companies which already had a large investment in proprietary technology (so could eliminate their own barriers), or were funded by them. A relatively small number do not fit into these categories. However, these results are also obviously subject to survivorship bias, so other than observation, one really cannot draw causal ('because') conclusions from them.



Creating course content in #Chisimba eLearning
850 days ago

This is the third howto video about using Chisimba in eLearning. This one gives you a brief overview of using the built-in content creation tools inside a course.



Chisimba core framework 3.3.1 released by Kenga Solutions team
851 days ago

The Kenga Solutions team are pleased to release version 3.3.1 of the Chisimba core framework. This is a heavily tested version of Chisimba, and the first release in over a year. It provides numerous bug fixes and enhancements. Most development during the past two years has been outside the core, in the modules (packages), a stable release of which will be made during the coming weeks.

To install Chisimba 3.3.1 on Ubuntu, proceed as follows.

If you do not yet have Apache, MySQL and all the required libraries, run the following commands in a terminal:

sudo apt-get install subversion apache2 mysql-server mysql-client php5 php5-mysql php5-imap php5-gd php5-curl php-pear php5-imagick php5-ldap php5-mapscript php5-mcrypt php5-memcache php5-pgsql php5-pspell php5-snmp php5-sqlite php5-tidy php5-uuid php5-xmlrpc php5-xsl

sudo pear channel-update pear.php.net
sudo pear upgrade pear
sudo pear upgrade-all
sudo pear install --alldeps -f Config Log
sudo pear install MDB2
sudo pear install MDB2_Driver_mysql
sudo pear install MDB2_Driver_pgsql
sudo pear install MDB2_Driver_mysqli
sudo pear install  --alldeps -f  MDB2_Schema


Then do the following:

1. wget http://kengasolutions.com/usrfiles/users/6258120112/releases/chisimba_3.3.1.tar.gz
2. Unzip it in the web root or a directory of your choice (e.g. /var/www/chisimba); see the example below
3. Open http://site.domain.tld/chisimba in your browser
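
As a concrete example of steps 1 and 2 on an Ubuntu machine (the target directory is just an illustration, and the archive is assumed to unpack into a chisimba directory):

cd /var/www
sudo wget http://kengasolutions.com/usrfiles/users/6258120112/releases/chisimba_3.3.1.tar.gz
sudo tar -zxf chisimba_3.3.1.tar.gz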

Install as per the instructions in the browser.

NOTE: There is as yet no stable install for modules (packages). You will either have to install them from a subversion checkout, or use the built-in module catalogue to install them over the web. A stable package server will be available for this in about 10 days (or sooner); meanwhile, the module catalogue points to chisimba.com, which has the subversion files in a package server.



Using the course control panel in #Chisimba eLearning
852 days ago

The control panel is one of the first tools you need to get to know after you create a course in an eLearning instance of Chisimba. The video below will take you through the basics of the control panel for your course.



Creating an eLearning Course in an eLearning instance of #Chisimba
852 days ago

Chisimba has powerful eLearning tools. With Chisimba, you can have a cloud-based eLearning site up and running in less than 10 minutes, and your first course online about a minute after that. The video below shows how to create your first course, assuming you already have a site up and running.



How to use the 'Clean Slate' module in #Chisimba to create a unique landing page
866 days ago

This howto will show you how to use the Clean Slate module in Chisimba to create a quick and simple interface to functionality and content for a site. This module is for site owners, not general users, but it enables the quick creation of a unique interface on a particular site, including multiple pages.



My talk on the African Virtual Open Initiatives and Resources at the Digital Freedom Expo
867 days ago

Here is a video of the talk I did about the AVOIR (African Virtual Open Initiatives and Resources) project at the Digital Freedom Exposition, held at the University of the Western Cape in Cape Town in April 2007. One of the many dreams I have is to rebuild something like AVOIR to help promote software engineering based on Free Software (open source) in Africa. AVOIR or an AVOIR-like initiative could also raise understanding of the concepts of digital freedom and idea capital (or software colonialism, as a colleague calls it), as well as promoting the free sharing of knowledge and ideas about technology to help contribute to development and the achievement of some of the millennium development goals.

With synergy, we can do anything.




Life in the cloud - a month in one day
867 days ago

My company, Kenga, provides development and support services for Chisimba, which I think is the only made-in-Africa Free Software (open source) development framework. We host our sites and our development environment in the public cloud, mostly with Rackspace. We have, however, started to experiment with a new cloud provider called Digital Ocean. They are a New York City based startup, and have a good model that seems like it would work well for the SMME sector, as they charge a fixed rate and don't charge for bandwidth. This also seemed good for my personal dkeats.com site, so when I moved it from Paul Scott's infrastructure, I moved it there. I was down for nearly two weeks in this process as I had to figure out a lot of things, including how to recover my own domain name from a previous service provider.

Yesterday, this site went down when I attempted to upgrade the virtual server from 256 to 512 MB of RAM to cater for an increase in traffic (hopefully people reading this blog, not hackers!). Nothing I tried would bring it back; there was a bug in the Digital Ocean ticket system, and there was no other way to contact them, so I panicked and moved a lot of stuff back onto Rackspace. The one thing I like about Rackspace is their fanatical customer service, the best I have ever encountered from any company anywhere in the world.

I made lots of rants about fly-by-night cloud operators, but actually Digital Ocean is a startup. Kenga is also a startup. Startups need to support one another so we can get over the hump together. So I will stick with them, keep dkeats.com there along with a couple of other sites, and hope for the best. I will also continue to use them for short-life demo sites that are less mission critical. Most importantly, the http://idlelo.kengasolutions.com site is there, and we are using that for three days of Chisimba training in Abuja, Nigeria next week. Hold thumbs!

It turns out that there was an issue with Grub in the version of Ubuntu that they were using in their images. dkeats.com was the only virtual server that I had not upgraded to Ubuntu 11.10, and the only one that I had rebooted, but I am not going to dare upgrade dkeats.com now until I get the fix for the Grub issue.

Just thought I would post this to explain the downtime on this site yesterday, as well as to show just how much of a geek I am!

 

Speaking of clouds, here is a time lapse video I made last year of clouds over Johannesburg.

 

Happy hacking, as RMS says....



How is Free and Open Source Software like academic work #FOSS #opensource
867 days ago

This is the third of a series of posts about Free and Open Source Software (FOSS) that arise from questions that I have been asked in various FOSS initiatives in which I have been involved. Some of these were initially posted at the KIM Blog at Wits University, and I thought it useful to redo them here so they are available more widely.


I was recently asked
You said FOSS should come easy to academics because doing FOSS is like doing academic work. How do you explain that assertion?

Fundamental to both good FOSS projects and academic work is the concept of peer review: others with similar interests and backgrounds taking a look at your work and helping to improve it. Both FOSS and academic work are quality-assured by peer review, sometimes formally and sometimes informally.

Both academics and FOSS practitioners share knowledge through collaboration in communities of practice. In these communities, participants are accorded merit based on their contribution of outputs and their work in the community. A researcher who writes lots of papers in good journals, or a developer who writes lots of really good source code, will be highly respected. They will be respected even more if they take part in the formal and informal activities of their communities.

FOSS developers share software source code through repositories and ideas through mailing lists, while academics share actual lab protocols and other knowledge through exchange visits, sabbaticals and, increasingly, online communities that are not unlike those of FOSS developers.

FOSS developers collaborate through joint contributions to source code, while academics collaborate through joint research and publication. While the details are different, the principles are quite similar in both. I know this because I have been an active and highly collaborative researcher who has published over 80 publications, and I have been an active developer who has contributed source code to FOSS projects. I see no obvious difference between the two kinds of collaboration, other than the objects around which the collaboration happens.

Both academic work and FOSS development thrive on open communication among peers. To me, working in both, there is little difference between sending an email to Bill Woelkerling in Australia asking if he thinks my coralline algal specimen is Hydrolithon based on the conceptacle photograph, and sending an email to a developer list asking if anyone can think of a way to use jQuery to manipulate embedded Flickr images after the page has loaded. The principle of open sharing is common to both.

Images: a coralline algal conceptacle of Hydrolithon, and a snippet of code to process a Flickr image after the page loads.

 

Both academics and FOSS developers build on the work of others that has been made publicly available. For developers, this may include reusing the actual source code, whereas for academics such copying would be called plagiarism, although in the world of Free and Open Educational Resources, editing content can also be done for academic purposes.

Both FOSS practitioners and academics engage in the mentoring of novices. For academics, this includes the supervision of graduate students and assisting junior colleagues. For developers, a wider variety of mentoring options are available.

Finally, both developers and academics claim freedom: for developers it is software freedom, while for academics it is academic freedom. Software freedom involves the freedom of software users (including other developers) to access the software and do things with it, exercising their freedom of choice within certain constraints. Academic freedom has many definitions, one of them being the freedom of inquiry. Also linked to academic freedom is the idea of being free to teach or communicate ideas or facts without being targeted for repression, job loss, or imprisonment. Software freedom advocates are often academics, and use academic freedom to speak about software. The chief difference is that software freedom can be protected by licenses, while academic freedom and its bounds are contested areas.

In another post, I will look at the implications of software freedom in more detail using a graphic example of the difference between free and proprietary software.



Is there a priori evidence that FOSS trumps proprietary software on quality? #FOSS #opensource
876 days ago

This is the second (originally seventh) of a series of posts about Free and Open Source Software (FOSS) that arise from questions that I have been asked in various FOSS initiatives in which I have been involved. Some of these were initially posted at the KIM Blog at Wits University, and then reposted on this site before I moved the site and decided to clean up old posts. I am reposting it here because it is still valid and might come up for other FOSS advocates or businesses.

I was recently queried about whether FOSS trumps proprietary software, which to me is a question that is difficult to answer out of context. The question was:

Is there a priori evidence that FOSS trumps proprietary software on quality or vice-versa?
This question was asked with considerable animosity and prejudice, without the questioner realising that he was confounding the general and the particular.

The word 'trump' has so many potential interpretations that I am uncertain what was implied by it, and it is the kind of question that is often associated with bouts of FUD (fear, uncertainty and doubt). But it is a question that has been asked, so I will try to offer an answer. If anyone has any other ideas to help answer it, please comment or leave them in the Facebook comment block that you will see if you open this post on its own.

An important aspect of this question is the use of the term 'a priori', something which can generally be considered as known 'before the fact' and independent of experience. The opposite, 'a posteriori' knowledge or justification, makes reference to experience. Here I will interpret 'a priori' as equating to something that is generally known, and for which the evidence is well established. The word 'trump' derives from 'trionfi' or 'triumph', so by 'trumps' I will take it to mean 'is significantly better than'.

Taking the general interpretation then, perhaps the question means to ask 'Is there a priori evidence that ALL FOSS is better than ALL PROPRIETARY software?' Nobody has, to my knowledge, made such a claim in the abstract. There is some really terrible FOSS software, and some really great FOSS software. There are some really terrible proprietary systems and some really great ones. While it has been shown from source code analysis and other research techniques that FOSS does tend to lead to higher quality code (to the extent that this is measurable, given that source code is secret for most proprietary software), I doubt that you could progress from the particular to the general in this way.

There is another way to look at this, and that is from the perspective of a FOSS ecosystem. A FOSS ecosystem consists of the following:

  • A large number of FOSS software tools and projects
  • The communities that support them, including a large number of private sector companies, institutions, and individuals
  • The people within the institution who can do one or more of the following: create, modify, install, adopt, document, train or use FOSS tools
  • The maturity of an organisation towards FOSS tools and projects, as well as its processes for selecting software for its use, including the FOSS-friendliness of its procurement policies, processes and practices
  • Companies that provide support and possibly 'value-added' services around FOSS technologies.
     

So the question might mean: generally speaking, is a FOSS approach better than a proprietary approach? The answer, once again, cannot be given out of context, because it depends on the degree to which an organisation has established its ecosystem, in other words its FOSS readiness. ElHag & Abushama [1] characterise maturity in a number of areas, including human, technical and general readiness factors.

In the abstract, I would argue that a FOSS approach is better for institutions in general, but for someone responsible for IT in an institution, there will always be cases that, taken in context, demand a proprietary approach or solution. FOSS and proprietary software integrate best when the architecture of the applications allows them to be integrated, and this is best achieved through a services orientation and adherence to open standards. An important aspect of why I believe FOSS 'trumps' for institutions is the avoidance of vendor lock-in, and the financial and other damage caused by forced upgrade paths.

If an organisation creates a FOSS ecosystem, it is likely to have source code of at least as good quality, AND it will be free from the kind of vendor damage that is not uncommon with large license vendors. Anyone who has been responsible for IT in an organisation is well familiar with vendor damage. Of course, this will only remain true as long as the organisation does its best to stay on that path and ensure that it creates plenty of synergy. The minute it fails to create synergy, and falls into a silo mode, many of the most valuable benefits of FOSS will be lost.

If the idea of 'TRUMPS' refers to existing applications, then there are some where the FOSS version is better and some where the proprietary application is better, purely in terms of fitness for achieving its intended purpose.

But even the word 'better' is itself context sensitive and subjective. For example, I use the Gimp, which to me is better than Photoshop (which I have used for many years), even though the Gimp does not have all the features of Photoshop. I can install the Gimp on as many computers as I want with a few clicks in my software's package management system. For me, the Gimp definitely trumps Photoshop. However, there are lots of people who would say the opposite, despite the need to spend substantial money to obtain permission to install Photoshop and its more difficult installation procedure. These different views are irreconcilable, as they relate to personal preference and what will satisfy an individual's needs and provide for other components of satisfaction.

For an institution, where vendor damage and excessive upgrade costs are so common, there is absolutely no doubt whatsoever that building a FOSS ecosystem TRUMPS building a proprietary one. But of course, like all ecosystems, there will be flow among them; FOSS and proprietary technologies will both be with institutions for the foreseeable future, and institutions just have to have people who are smart enough to know when to make appropriate choices. Let me repeat this: institutions just have to have people who are smart enough to know when to make appropriate choices.

For me personally and professionally, FOSS trumps almost all proprietary technologies. I have used almost nothing else for the past 11 years, and I still have not developed flesh-eating bacteria, had fingers fall off due to leprosy, or been unable to do something that I have needed to do with my computer. And I have not spent a penny on software, nor have I had to deal with virus infestations.

ATTRIBUTION: This article makes use of

 

REFERENCES

ElHag, H.M.A. & Abushama (undated). Migration to FOSS: Readiness and Challenges. http://www.itrc.sd/foss/papers/Migration%20to%20FOSS_%20Readiness%20and%20Challenges.pdf




Does FOSS mean costless? #FOSS #opensource
877 days ago

This is the first of a series of posts about Free and Open Source Software (FOSS) that arise from questions that I have been asked in various FOSS initiatives in which I have been involved. Some of these were initially posted at the KIM Blog at Wits University, and redone on dkeats.com before I moved the site. As part of the recovery of old posts that still have some validity, I am reposting it here.
One of the questions raised during a presentation on FOSS at Wits in 2009 was:
Does FOSS mean costless?
The short answer is "Depends!"  FOSS means costless only in terms of the license, and access to the code or a compiled version of it.

For me, as an individual user of FOSS desktop systems, the answer is definitely "yes", and most emphatically so. I use an operating system that is without cost to me. I use office software (word processor, database, vector drawing, presentation, spreadsheet) that is definitely without cost. I use an awesome graphics application called 'the GIMP' that is without cost. I watch videos and listen to music using applications that are without cost to me. I browse the web on software that is without cost to me. I create animations and videos using a variety of software tools that are without cost to me. I manage my extensive photo collection using software that is without cost to me. I do software development as a hobby, and all the tools I use for development are without cost to me. My daughter uses the same system, plays games that are without cost to her, and also does her school work with software that has no cost. Indeed, there is almost nothing that I can think of that I need to do for fun or work that would require me to pay a software license fee. So, yes, for me as an individual, FOSS is certainly without cost. Indeed, I once made a mapping of the software I use against proprietary packages, and I have in excess of R300 000 worth of software (based on full commercial prices) on my computer, for free.


Layers in a FOSS ecosystem.

If we look at the diagram of layers in a FOSS ecosystem, we can see a stack of different ways of using or working with FOSS. While actual costs are case specific, there would be a reasonable expectation of increasing costs as one moves higher in these layers.

Thus, the reason the cost to my daughter and me is zero is that we are able to use existing software, as is, in the form in which it is supplied. This is the most trivial way in which software is deployed, although it is an important one. Most real problems in large organizations cannot be solved using simple deployment of software as is, with no customization. This applies to both FOSS and proprietary software.

In a typical large software project, licenses are an important and recurring cost, but do not generally constitute more than 25% of a typical project implementation, and often less. Thus, the cost of implementation of large systems will be similar for both proprietary and FOSS systems, unless one requires more customized code to be written than the other. This cannot be addressed in the abstract, however, the desire of the ignorant notwithstanding.

A very important cost impact of proprietary licenses lies in vendor lock-in and consequent exit costs. Exit costs for proprietary software can be so significant that organizations may be reluctant to exit even when there are compelling reasons to do so. The cost of Wits exiting the Oracle Student System, a typical proprietary, tightly coupled system, is likely to run to double digits, without even including the costs of implementing the alternative solution. Let me say that again, because it is so often overlooked: the exit costs are huge, and the time pressure adds additional costs and constrains our choices severely.

There will be times when a FOSS solution to a given problem will be more expensive than a proprietary solution when taken from the perspective of a particular project. However, measured in terms of contribution to building a broader ecosystem, such extra costs may be justified; or they may not be. Decisions in this case will require a full examination of the costs and benefits, both long and short term, and an application of wise minds - just as you have to do with any software project whether based on FOSS or proprietary software.

When an organization is undergoing change from predominantly proprietary to substantially FOSS (for example), there will be a typical change or pain curve. It may cost more in the short term to implement FOSS than to go with the 'standard' proprietary solutions. Sometimes - due to a lack of understanding of the nature of pain curves - organizations change back to the old way before they have emerged from the pain curve, and never realise the long-term value of building a FOSS ecosystem.

When you measure the total cost over the life of a system, in general FOSS should come out cheaper, but there is no general guarantee that this will be true in the abstract. Most costs in a software project are part of implementation, and it is possible to have very expensive implementations of either a FOSS or a proprietary technology. The cost impact will be determined by the degree to which you have created or have access to an ecosystem of support. This ecosystem will consist of:

  • internal skills
  • companies providing support
  • local and global communities.

Over time, with adequate support, FOSS will reduce costs in most areas. However, there will be some for which it will not. Therefore, it is vital to evaluate each case on merit, and to do so skillfully and with knowledge and understanding of all the nuances. In this respect, FOSS is no different from any other technology acquisition. It is certainly not costless. And how much it reduces costs will be highly dependent on what you do with it. You might, for example, discover that you can innovate more with FOSS and achieve higher value, and therefore your costs may increase along with value.

Indeed, the notion of cost in the abstract is not really very useful. There are a number of business models for FOSS in an organization, and they are all different, with different cost impacts, and the costs vary with the nature of the project, the availability of in-house skills, the degree to which the principles are understood and embraced, and the availability of external resources that can be called on when needed.

The same range of business models is true for proprietary software, so the only way to compare costs is to have exactly identical projects under exactly identical conditions. In such a situation, FOSS should be somewhat to significantly less costly because it lacks the license fee. However, such a situation is almost impossible to imagine creating in reality, so discussions of costs in the abstract are really a distraction. Until you have an actual project, and its implementation ecosystem, cost is not something that you can meaningfully consider.

I have always maintained that any savings from implementing FOSS are a collateral benefit, the metaphorical cherry on the cake. Likewise, the Joint Information Systems Committee (JISC) of the UK concluded that the real value of FOSS arises out of the options and flexibility that it brings. They conclude:

In fact, the real value of OSS is that it makes it possible for you to exercise control over how you run your institution's IT department by allowing you to choose a model from any point on the spectrum that runs from fully self-supporting to fully outsourced. In turn, this allows institutions to choose the extent to which they want, and are able, to take advantage of the strategic organisational gains that accrue from the use of open data standards and open source software.

http://www.oss-watch.ac.uk/resources/procurement-infopack.xml#ixzz0r0KMuovv
Under Creative Commons License: Attribution Share Alike

My main point here is that whether FOSS is without cost or not depends very much on particular cases. Cost is one side of the cost-benefit equation. Value arises when benefits are greater than costs. Whether this is true for any given software project is not a function of FOSS versus proprietary, but how well you execute the project, and how long you can sustain the benefits. The question of cost is meaningless in the abstract, though very important nevertheless.