Cloud computing by wynpnt on Pixabay (CC0)

In the second week of #EL30 we explored the topic of Cloud. Stephen begins by introducing the idea:

The joke is that “the cloud” is just shorthand for “someone else’s computer.” The conceptual challenge is that it doesn’t matter whose computer it is, that it could change any time, and that we should begin to think of “computing” and “storage” as commodities, more like “water” or “electricity”, rather than as features of a type of device that sits on your desktop.


On initial reading I found this a difficult concept to wrap my head around. How could I consider cloud computing to be like water? However, the more I learned about the topic from the resources, the weekly activity, fellow participants’ blog posts, and my own research, the more weight and meaning Stephen’s words carried for me.

One of the defining features of cloud computing is that it can be an on-demand self-service – “the cloud is a form of utility computing”. And like a utility service, electricity in our homes for example, we can choose a provider, sign up and create an account with them, and use the service whenever we need it, as much or as little as we want. We’re then billed by the provider for the extent of our usage. The utility or commodity comparison makes clear sense in this regard. One distinct advantage of the cloud is that by placing our data and the services we use there, they become accessible to us from virtually anywhere, not just at home but on the train, in the office, and on any device that can access the web.
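To make the metering idea concrete, here is a minimal sketch of utility-style billing; the rate and usage figures are made up for illustration:

```python
# Utility-style metered billing: as with water or electricity, you pay
# only for what you actually consume. Figures below are hypothetical.
def bill(hours_used, rate_per_hour):
    """Cost of a metered cloud service for the hours actually consumed."""
    return round(hours_used * rate_per_hour, 2)

# e.g. a small virtual server left running for 40 hours at £0.03/hour
print(bill(40, 0.03))  # → 1.2
```

The point of the sketch is the shape of the model, not the numbers: no up-front purchase of hardware, just a meter on consumption.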
#EL30 Week 2 conversation with Tony Hirst

In the #EL30 guest conversation this week, Tony Hirst, Senior Lecturer in Telematics at the Open University, UK, spun up a virtual server on Digital Ocean, installed a pre-built Docker container into it, and ran the Jupyter Notebook application. The Jupyter web app facilitates the creation of a shared living document: the programming code (the input) can be edited and run, and the outcomes (the output) viewed instantly. Stephen points us towards the idea that

These new resources allow us to redefine what we mean by concepts such as ‘textbooks’ and even ‘learning objects’.


It took Tony a matter of seconds to do this, at a cost of approximately £0.03 per hour of usage. Tony, in England, sent Stephen, in Canada, the IP address auto-generated by Digital Ocean, in the form of a URL pointing at the container he had spun up. Halfway around the world, Stephen browsed to that URL and was able to access the application alongside Tony. Essentially, the cloud made it possible for them to share a computing service over the internet.
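The “living document” nature of a notebook is easy to picture: each cell holds editable code whose output is recomputed on demand. A hypothetical cell might contain something like this; edit the input list, re-run, and the printed output updates in place:

```python
# Hypothetical contents of a single Jupyter notebook cell: the input is
# editable, and re-running the cell immediately refreshes the output.
data = [3, 1, 4, 1, 5, 9]
mean = sum(data) / len(data)
print(f"mean of {data} is {mean:.2f}")
```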

In terms of its application to eLearning, the cloud offers some powerful opportunities and benefits: virtual sandboxes can easily be created to test out proprietary (or any other) software at much reduced cost to both educators and students across a variety of disciplinary contexts; and by removing the barriers of device, OS, and particular device configuration, anyone with a web-enabled device and internet access can begin working independently, or with peers, teachers, or colleagues, within the specific online learning environment of their choice.

In particular, anyone familiar with Linux distributions will know about package installation and the need to have dependencies installed in order for the package you want to work properly. Spinning up a virtual server to host a pre-built container, including all the code and dependencies needed by the application(s) you want to work on with students or colleagues, removes the problematic support requirement of troubleshooting the myriad devices, operating systems, configurations, dependencies and versions that each individual person’s machine can have. The cloud overcomes this – everyone starts off singing from the same hymn sheet: the same shared online computing environment.
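A rough illustration of the problem containers remove: without a shared environment, you would have to check every machine against a pinned list of dependency versions. This is only a sketch; the `version_of` hook is injectable purely so the check can be demonstrated without any particular packages installed:

```python
from importlib import metadata

# Sketch of the dependency check that a pre-built container makes
# unnecessary: compare installed package versions against pinned ones.
def check_dependencies(pinned, version_of=metadata.version):
    """Return packages whose installed version differs from the pin.
    Values are the installed version, or None if the package is missing."""
    problems = {}
    for package, wanted in pinned.items():
        try:
            found = version_of(package)
        except metadata.PackageNotFoundError:
            found = None
        if found != wanted:
            problems[package] = found
    return problems
```

In a container, this whole class of per-machine checking disappears, because every participant launches an identical environment.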

Docker – Containerized Applications

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.


An aspiration that Tony felt would be worthwhile for institutions to work toward is that of institutional clouds – more institutions offering Docker machines (or similar) that staff and students could log into via URL from their personal computers. These machines, running on institutional servers, could host a multiplicity of Docker containers or container clusters catering for an extensive array of disciplinary and trans-disciplinary subject matter, all accessed through a web browser. Ideally each would be mounted to the individual’s own file store so that work could be launched from, and saved back to, that store. Currently, with the Digital Ocean virtual server setup described above, it’s not possible to save work within the Jupyter Notebook running inside the Docker container. Work done during a session is destroyed upon exiting the application unless it is downloaded in a specific file format (.ipynb) that can be re-uploaded to continue working on it the next time the containerised application is used.
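For context, the .ipynb file mentioned above is simply a JSON document describing the notebook’s cells, which is why downloading it is enough to preserve a session. A minimal sketch of that structure (the cell contents here are invented):

```python
import json

# Minimal sketch of the .ipynb format: a notebook is a JSON document
# listing its cells, so saving the file preserves work between sessions.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": ["print('hello from a saved session')"],
        }
    ],
}

serialised = json.dumps(notebook, indent=1)  # the text you would re-upload
```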

Here’s a snapshot of Tony’s ‘showntell’ Github repository. Notice that he has branches with different Jupyter Notebooks for astronomy, chemistry, computing, electronics, engineering, linguistics, etc. Same application; multiple trans-disciplinary uses and users.

In a context where I am reading more and more about ownership of our own data and self-hosting as a possible means of reclaiming digital identity, Keith Harmon set an interesting and intriguing scene over on his blog:

As a complex system myself, I self-organize and endure only to the degree that I can sustain the flows of energy (think food) and information (think EL 3.0) through me. The cloud is primarily about flows of information, and the assumption I hear in Stephen’s discussion is that I, an individual, should be able to control that flow of information rather than some other person or group (say, Facebook) and that I should be able to control the flow of information both into and out of me. I find this idea of self-control, or self-organization, problematic—mostly because it is not absolute. As far as I know, only black holes absolutely command their own spaces, taking in whatever energy and information they like and giving out nothing (well, almost nothing—seems that even black holes may not be absolute).


Keith provides a deeper insight into his perspective and extends an invitation to his readers to journey with him outside, where he likes to think through these kinds of discussions.

It helps me to walk outside for discussions such as this, so come with me into my backyard for a moment. The day is cool and sunny, so I’m soaking in lots of energy from sunlight. I’ve had a great breakfast, so more energy. I’ve read all the posts about the cloud in the #el30 feed, so I have lots of information. Of course, I’m pulling in petabytes of data from my backyard, though I’m conscious of only a small bit. Even with the bright light, I can see only a sliver of the available bandwidth. I hear only a little of what is here, and I certainly don’t hear the cosmic background radiation, the echo of the big bang that is still resonating throughout the universe. I’m awash in energy and information. I always have been. Furthermore, I can absorb and process only a bit (pun intended) of the data and energy streams flowing around me, and very little of this absorption is my choice. Yes, if the Sun is too bright, I can go back inside, put on more clothing, or put on sunscreen, but really, what have I to do about the flow of energy from the Sun?


I’m going to take some time to mull over thoughts of being truly able to control the flow of information both into and out of me. It’s an intriguing question.


If anyone is seeking to implement Jupyter Notebooks into their practice, Tony has authored a resource entitled “Getting Started With Jupyter Notebooks for Teaching and Learning”, which might be useful.

If you’d like to test-drive Jupyter for yourself you can do so at

If your HEI provides Microsoft O365, your credentials should be able to log you in to run notebooks.

Google also offers a notebook environment that includes integrated storage within Drive.

Image of Data by xresch on Pixabay (CC0)


Week 1 of #EL30 addressed the topic of Data. Within that, two core conceptual challenges relating to eLearning were explored, “first, the shift in our understanding of content from documents to data; and second, the shift in our understanding of data from centralized to decentralized.”

All of this exists against the backdrop of “what is now being called web3, [where] the central role played by platforms is diminished in favour of direct interactions between peers, that is, a distributed web”. The topic of data is relatively new to me and I am figuring much of it out as I go.

Our data exists online across multiple distributed nodes and each of us embodies the unique identifier that links all of this data together. In Stephen’s week 1 data summary article he highlights how digital data is beginning to permeate many aspects of our lives – “We are beginning to see how we generate geographic data as we travel, economic data as we shop, and political data as we browse videos on YouTube and Tumblr. A piece of media isn’t just a piece of media any more: it’s what we did with it, who we shared it with, and what we created by accessing it.” The traces of data we leave behind of where we’ve been online create a depiction of us for those who can see it: an online identity, built from breadcrumbs in the digital woods.

Activity – Conversation with Shelly Blake-Plock

Week 1 conversation with Shelly Blake-Plock, Co-Founder, President and CEO of Yet Analytics

The week 1 conversation with Shelly Blake-Plock, Co-Founder, President and CEO of Yet Analytics covered a range of interesting topics. Discussion ebbed and flowed and touched upon concepts such as

  • using data in actionable ways to understand learners, to improve instruction and content and to manage data systems that support learning,
  • the Experience API (xAPI) specification,
  • the xAPI enterprise learning ecosystem,
  • Learning Record Store (LRS),
  • data ownership and management,
  • identity management applications,
  • the privacy trade-off of these systems.

There was good discussion around the Experience API, commonly abbreviated to xAPI, a modern specification for learning technology that helps to turn learning activities, experiences and performance into data. Shelly was the Managing Editor of the IEEE Learning Technology Standards Committee Technical Advisory Group on xAPI (TAGxAPI), which created a technical implementation guide for xAPI.

Essentially, xAPI was created as a way of tracking learning experiences and performance that extends beyond the bounds of our traditional Learning Management Systems (LMS) and the content and activities that learners launch from within them. It allows an individual’s learning record to be moved more freely out of silos such as the LMS, as long as it is in xAPI format or can be converted to it. The notion is that learning occurs everywhere; it’s not simply confined to the LMS or to the classroom, and it’s now possible for the data generated from learners’ experience and performance (online and offline) to be tracked and sent via xAPI statements (signals) from a range of different origins such as mobile apps, simulations and games, and the physical world through wearable technology and sensors.
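A rough sketch of what such a statement looks like: xAPI expresses each experience as an “actor, verb, object” triple in JSON. The names and activity URI below are invented for illustration; verb identifiers come from published vocabularies such as ADL’s:

```python
import json

# A minimal xAPI statement: who (actor) did what (verb) to what (object).
# Actor and activity details here are placeholders, not real accounts.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/activities/cloud-module",
        "definition": {"name": {"en-US": "Cloud computing module"}},
    },
}

print(json.dumps(statement, indent=2))
```

Because a statement is self-describing JSON, it can originate from an app, a simulation, or a sensor just as easily as from an LMS.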

With this data it becomes possible to analyse and understand how learners are learning and potentially improve the content and activities that they receive. xAPI statements about learning experiences can then be hooked up via a number of launch mechanisms to a Learning Record Store (LRS) to collect reams of data about how the learner interacts with their learning environments. Analysis of this data can be automated through machine learning algorithms depending on what type of information is being sought.
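A hedged sketch of how a statement might be delivered to an LRS over HTTP. The endpoint and credentials below are placeholders, but LRSs expose a `/statements` resource and expect the `X-Experience-API-Version` header:

```python
import base64
import json
import urllib.request

# Build (but do not send) an HTTP request delivering one xAPI statement
# to a Learning Record Store. Endpoint and credentials are placeholders.
def build_lrs_request(endpoint, username, password, statement):
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        url=endpoint + "/statements",
        data=json.dumps(statement).encode(),
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + token,
            "X-Experience-API-Version": "1.0.3",
        },
    )

req = build_lrs_request(
    "https://lrs.example.com/xapi", "user", "secret",
    {"actor": {"mbox": "mailto:learner@example.com"},
     "verb": {"id": "http://adlnet.gov/expapi/verbs/experienced"},
     "object": {"id": "http://example.com/activities/video-1"}},
)
# urllib.request.urlopen(req) would actually deliver the statement.
```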

How xAPI works with the LRS


Most of us have likely become familiar with the term ‘surveillance capitalism’ as the purported business model employed by many web2 corporations and platforms. Online data generated by each of us (our digital footprint) is already bought and sold to online advertising and marketing agencies. We unwittingly and nonchalantly give our ‘consent’ by clicking agree to the terms and conditions of the seemingly ‘free’ online platforms and services we sign up to.

The ‘business model’ is explained early in this presentation by Laura Kalbag:

Laura Kalbag speaks about indie design at WordCamp, London

When viewing all of this through a critical lens, talk about tracking and gathering learner data for analysis immediately brings with it the need to talk of a range of considerations around ownership, ethical use, privacy, security, and data governance. I’ve noted similar sentiments from many of my fellow #EL30 participants.

The use of learning analytics to support the student experience could afford valuable insights, but there are ethical implications associated with collection, analysis and reporting of data about learners.

Rebecca Ferguson (2012) defines learning analytics as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs.”

JISC UK’s Code of practice for learning analytics, authored by Niall Sclater and Paul Bailey, provides very helpful guidance under eight key headings, identified to help institutions (and possibly other organisations) understand and carry out responsible, appropriate and effective analysis of the data that they gather:

  1. Responsibility
  2. Transparency and consent
  3. Privacy
  4. Validity
  5. Access
  6. Enabling positive interventions
  7. Minimising adverse impacts
  8. Stewardship of data

Niall Sclater also compiled a literature review of the ethical and legal issues for this code of practice, in which he collates critical ethical questions from a diverse body of literature, mapped to many of the areas identified in the code of practice. Here’s a snapshot of some of the thought-provoking questions posed in that review:

| # | Ethical question | Code of Practice area |
|---|------------------|-----------------------|
| 1 | Does the administration let the students/staff know their academic behaviours are being tracked? (Hoel et al., 2014) | Responsibility |
| 2 | Does an individual need to provide formal consent before data can be collected and/or analysed? (Campbell et al., 2010) | Transparency and consent |
| 3 | How transparent are the algorithms that transform the data into analytics? (Reilly, 2013) | Validity |
| 4 | Who can mine our data for other purposes? (Slade & Galpin, 2012) | Stewardship of data |
| 5 | Who is responsible when a predictive analytic is incorrect? (Willis, Campbell & Pistilli, 2013) | Privacy |
| 6 | Does [a student profile] bias people’s expectation and behaviour? (Campbell et al., 2010) | Minimising adverse impacts |

On the #EL30 course I’ve read a bit about IndieWeb, a community based on the principles of owning your own domain and owning your own data. IndieWeb attempts to make it easy for everyone to take ownership of their online identity and believes that people should own what they create. I definitely want to explore this further in light of the next generation of learning technologies.

I’m probably a little late to the eLearning 3.0 MOOC (#EL30) party; nonetheless, I hope to avail of this opportunity to learn from Stephen Downes’ MOOC and from the network of experienced people created by the connectivist learning approach that he employs (more info. here and here). It’s already clear to see a diverse, energetic, knowledgeable network emerging in the course feeds area and I hope to contribute to this community where I can. At least this blog post should allow me to submit my feed for RSS harvesting!

An introductory article by Stephen entitled ‘Approaching E-Learning 3.0’ had me immediately hooked:

“If you’re reading this, then this course is for you. You’ve demonstrated the main criterion: some degree of interest in the subject matter of the course.”

I’m certainly interested, so let’s give it a go!

The focus of #EL30 will be to explore key domains that Stephen envisages within the next generation of distributed learning technology. The main topics being explored are laid out in this image.

#EL30 Topics
#EL30 Topics

In the presentation Stephen gave to launch #EL30, he rounds out the detail of each of these topics and considers the impact of the next wave of emerging and distributed learning technologies:

#EL30 launch presentation by Stephen Downes

By way of a quick introduction, I work as Technology Enhanced Learning Manager in Graduate & Professional Studies at the University of Limerick, Ireland. I’m involved in the design and production of flexible online and blended programmes and research of same, and on shaping related institutional structures and processes.

I’m interested in open and online learning, educational technology, instructional (learning experience) design, technology in general, and all associated literacies. I’ve been thinking about establishing my own web presence for some time and participating in this MOOC has given me the impetus to go and do it.

Already an abundance of topics have piqued my interest – linked data, web (re-)decentralisation (see SoLID, created by Sir Tim Berners-Lee), Indieweb, gRSShopper, Webmentions, the Fediverse, RSS aggregation and harvesting, and many more.

I look forward to exploring and understanding them in greater depth.