#EL30 – Resources Task

This week, Stephen tasked us with creating a Content-Addressed Resource on the distributed web, or Dweb.

Create a resource (for example, a web page) using IPFS, Beaker Browser, Fritter, or any other distributed web application (see some of the examples here). Provide a link to the resource using any method you wish.

To help prepare for this task, watch the video ‘From Repositories to the Distributed Web‘ as well as these videos on IPFS and Beaker: installing IPFS, making a website with IPFS, installing Beaker.


The final results

The Dweb?

Dweb – The letter D in the popular shortened version stands for either decentralised or distributed. Here’s a visual depiction:

Image Source

The conceptual framework behind the creation of a decentralised or distributed internet is an attempt to replace, improve upon, or at least run parallel to, the current centralised web (based on the web 2.0 premise). A feature of this current web is siloed, platform-specific engagement.

In part, this shift in thinking has transpired as a strong reaction to the centralisation of control and power in the hands of a few giant internet and tech companies. A decentralised, distributed web would, in theory, spread power and control more evenly across a much larger network of people, in the hope of removing the reliance on web 2.0 platforms to communicate with one another, thus diluting their influence.

Further, the dweb has been brought about by the need to create an immutable way to preserve the largest open and accessible collection of human knowledge ever created.

Here’s how I did it

My laptop screen while installing and initialising IPFS

Following Stephen’s videos (linked above) was very straightforward. They are very clear and thorough. The only real hurdle I had to overcome was that I was performing the operation on my Mac laptop rather than on a Windows device. All in all, the process was much the same.

Downloading and extracting IPFS

The first task I set about was downloading and installing the go-ipfs distribution implementation of the Inter-Planetary File System (IPFS).

I unzipped/extracted the go-ipfs download into my home directory (davidmoloney$ in my case). It isn’t as easy as double-clicking on the .exe file in the directory and following an installation wizard I’m afraid! You are the installation wizard!


I opened up the terminal application on my Mac to begin the process of installation. First things first, I changed directory into the new go-ipfs directory which, if you remember, I had saved to my home directory.

cd ~/go-ipfs

In Stephen’s video, in order to list the subdirectories within the go-ipfs directory on his Windows device, he types

dir

into Microsoft PowerShell. On the Mac terminal the equivalent command is

ls

Running the ls command in Mac terminal within the go-ipfs directory

You could also type

ls -l

if you were looking for some further detail.

Running the ls -l command in Mac terminal within the go-ipfs directory


It’s the “ipfs” file listed on the right-hand side of the “ls” screenshot above that I am initially interested in. I want to initialise this file in order to establish my IPFS node. To do this I type

../go-ipfs/ipfs init

Initialising the ipfs file generates a hashed peer identity for my IPFS node on the distributed web. It also creates a link for me to progress to the next step and open the Readme file.

I copied the hashed value that had been generated in the terminal (by highlighting it and using Command+C) in order to open the Readme file, also ensuring to copy the

cat /ipfs/

code that comes immediately before the hashed string. Stephen notes the importance of this in his video. Without the “cat” prefix piece the command will not run properly.

I typed and pasted the following command into the terminal to open the Readme file. You will replace the *ABCD* section in the code below with the hash that is generated for you (although I think it’ll probably be the same anyway – ending in “Vv”).

../go-ipfs/ipfs cat /ipfs/*ABCD*/readme

The Readme.md file should open for you within the terminal.

Initialising IPFS via the Mac terminal and opening the Readme file

Starting the daemon service

Once initialised, it is important to get the IPFS service up and running by starting the daemon service:

../go-ipfs/ipfs daemon

I love how Stephen pronounces daemon in his video – demon! Very Irish!

The IPFS Companion Add-on for Firefox

Ordinarily, it seems that by browsing to the web address produced after running the daemon service, I should be able to access IPFS via my browser. However, without the IPFS Companion add-on, I couldn’t get to that point using Firefox.

Once the IPFS Companion add-on is installed you can click the small icon in the browser bar and then select the Open Web UI option to view a dashboard interface.

IPFS Companion add-on with option to “Open Web UI”
IPFS companion showing a connection to 496 peers
Dashboard UI of my IPFS node and distribution of peers chart
The geographical distribution of peers

Creating and hosting my simple website on IPFS

I followed Stephen’s guidance in his video and used Gio d’Amelio’s quick tutorial to get my simple website set up and hosted on IPFS.

Using the Sublime Text 3 text editor for code, I copied and pasted the sample text provided in the quick tutorial and created basic index.html and style.css files. I saved the files in a subdirectory that I similarly named “ipfssite”, following Stephen’s lead.

Adding my site to IPFS using the terminal

I opened a new terminal window and changed directory into the go-ipfs directory within my home directory again.

cd ~/go-ipfs

From here, I ran the following command to add the .html and .css files within my ipfssite subdirectory to my IPFS node.

./ipfs add -r ipfssite

Doing this produces hashed values for all of the files and for the entire site. The very last hash value printed before the command completes is the one to use to browse to your IPFS site.
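The idea behind these hashes – content addressing – can be sketched in a few lines of Python. This is only a simplified illustration using a plain SHA-256 digest (real IPFS identifiers, CIDs, wrap the hash in extra metadata, and the page content here is invented):

```python
import hashlib

def content_address(data: bytes) -> str:
    """A simplified content address: the SHA-256 digest of the bytes."""
    return hashlib.sha256(data).hexdigest()

page = b"<html><body>Hello, Dweb!</body></html>"

# The same content always produces the same address, no matter who adds it...
assert content_address(page) == content_address(page)

# ...while changing even one byte produces a completely different address.
tweaked = page.replace(b"Hello", b"Hullo")
assert content_address(page) != content_address(tweaked)
```

This is why a resource on IPFS is retrieved by what it is rather than where it lives: the address is derived from the content itself.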



And tada!

Finally, I decided to install the Beaker browser and establish a new .dat website also. This post is already a little unwieldy so I won’t detail how that went, but it was all pretty straightforward. Ultimately, I simply followed along with Stephen’s video instruction, which was excellent.

The first version of my .dat site on Beaker browser

Featured image, Universe, by geralt

#EL30 Week 4 – Identity

The world changes. Some people don’t.
You learned things that were true back then, but now they’re false.
You got successful doing things one way, but now that way is moot.
You still consider yourself an expert, but that expertise has expired.
You dug so deep into something that you lost perspective, and didn’t realize the landscape had changed.
Sometimes it’s just a change in situation. The strategy that got you to where you are is different from the strategy that will get you to where you want to be next.


In #EL30 this week the focus was Identity.

This post contains more questions than answers, more randomly assorted out-loud thoughts than anything else. I’m prepared to be ‘not quite there’ in my interpretations of much of this. It’s all a work in progress, ironically.

Identity is a deep and complex topic, and one that could be discussed in a variety of different ways. It is both a personal (internal) and social (external) construct. It isn’t solely what we think of or communicate about ourselves, our self-image, but what others think of and communicate about us also. Consideration of identity from a psychological perspective through the work of Carl Rogers can incorporate both aspirational and fantasy elements. We see this more often nowadays with people on social media portraying a projected sense of self, or a more ideal version of themselves, through their publicly broadcast social media, and other people providing their impressions of that through liking, sharing, following, friending, etc.

For me, identity is more a perpetual interplay of elements within different contexts than a finished product; it’s also more plural than singular. Our self-concepts about our identity are likely to change as the world around us changes and our role changes within it. Identity is never complete; it is ever in-process. Who we are and what we do is multifaceted, changeable, and imperfect. And my understanding of it is much the same.

The essence of identity might refer to the type of person we are recognised as being, both internally and externally, at a certain point in time. The term being is inclusive of the type of person we were in the past and the one we might become in the future also. 

Over on Jenny Mackness’ blog, she wrote the following piece, quoting renowned social learning theorist Etienne Wenger, which really resonated with me.

It is not just what we say about ourselves or what others say about us. It is not about self-image, but rather a way of being in the world – the way we live day by day. He [Etienne] expands on this on p.151 of his book, writing:

An identity, then, is a layering of events of participation and reification by which our experience and its social interpretation inform each other. As we encounter our effects on the world and develop our relations with others, these layers build upon each other to produce our identity as a very complex interweaving of participative experience and reificative projections. Bringing the two together through the negotiation of meaning, we construct who we are. In the same way that meaning exists in its negotiation, identity exists – not as an object in and of itself – but in the constant work of negotiating the self. It is in this cascading interplay of participation and reification that our experience of life becomes one of identity, and indeed of human existence and consciousness. (p.151)

Blog Source and Book Source

Conversation with Maha Bali

#EL30 Week 4 Identity – conversation between Stephen Downes and Maha Bali

In this week’s conversation, Stephen explored the topic further with Maha Bali. I was already aware of some of Maha’s work through the work of Dr Catherine Cronin. Stephen and Maha spoke about the composition of identity, whether elements are internal or external, how our activities and our identity relate, and about a number of Maha’s activities, including Virtually Connecting and the ongoing Equity Unbound course.

Maha described identity, in a blog post she wrote prior to the conversation, as evolving, dynamic, and contextual. In it, Maha spoke about recognition of who and what we are as a fluid concept dependent upon a range of factors – our perception of self, others’ perceptions, comparative perspectives, the particular time in our life, etc. Personal identity is something that is constantly negotiated. As Maha said in that blog post, her Virtually Connecting co-creation felt like an extension of herself. What she helped to create felt like a part of who she was. The conversation finished on a very interesting note, with both agreeing that identity is qualitatively different from the sum of its parts.

Another key takeaway from the conversation was the discussion about choice. We choose to actively take up an identity or to identify with something, like being ‘resilient’, and choose not to identify with other things, like being ‘a quitter’. Each of us is selective in knowing what we are, and knowing what we are not.

“Identity requires some element of choice.”

“Identity is marked by similarity, that is of the people like us, and by difference, of those who are not.”


Digital Identity

Identity and digital identity are not one and the same. Someone without access to the internet still has an identity. In a presentation I’ve given previously entitled ‘Who Am I Online?’, I portrayed digital identity (in particular) using the concept of an identity box. Inside the box is what you think of yourself, your perceptions of all that you identify with – the personal. The outside of the box represents external thoughts about your identity: what you are socially seen to identify with, or the parts of your identity that you may not have as much control over shaping, such as the digital footprint created about you from the traces of data you leave behind online by ‘forces beyond our control’.

Identity Box idea. Vinyl Cube by Carson Ting on Flickr.

“If identity provides us with the means of answering the question ‘who am I?’ it might appear to be about personality; the sort of person I am. That is only part of the story. Identity is different from personality in important respects. … an identity suggests some active engagement on our part. We choose to identify with a particular identity or group. … [the] importance of structures, the forces beyond our control which shape our identities, and agency, the degree of control which we ourselves can exert over who we are.”


I was attempting to get people to comprehend identity as something that we have control over certain elements of, but our agency with regard to complete control over it is limited.

As part of the session I delivered, I used the Lightbeam plugin for Firefox, linked to below. I explained to the audience that I was starting the plugin at the beginning and that, over the course of my 40-minute presentation, my browsing behaviour would be captured by it. At the end of the session I displayed the graph visualisation, revealing the kind of identity profile that had been built about me behind the scenes while I was presenting. The visualisation listed the sites I browsed to during the session along with the trackers that had been following me from site to site across the web as I browsed, generating an identity profile of me.

Sample screenshot of Mozilla’s Lightbeam from Wikipedia

Further reading and resources

Identity, Keys and Authentication

To view an insightful perspective into the future of identity and online authentication, this video from Stephen Downes explains the concepts of public and private key cryptography and introduces Yubi keys.

At this link, Bonnie Stewart speaks about Digital Identities: Six Key Selves of Networked Publics.

Here are some further resources to inform yourself about the digital traces we all leave behind in online environments and how to begin to counteract them:

Featured image by Ben Sweet on Unsplash

#EL30 Week 3 – Graph

For #EL30 this week, the topic of Graph is explored. This blog post will not address our task for this week but will instead capture some of what I’ve been considering about the topic and some excellent resources I’ve found that have helped to shape my thoughts.

The graph (think network, community, ecosystem of connections) is seen as an important conceptual framework for the movement from web2 to web3. In essence, I conceptualise this movement as the gradual removal of the current platform middlemen operating particular business models, replacing them with more direct communication and interaction between one another over the internet.

Graph constructs aid with depicting distributed networked systems. Network science helps us to understand the ways in which these systems operate.

Conversation with Ben Werdmüller

The concept of graph was asserted by Stephen at the outset of the week:

The graph is the conceptual basis for web3 networks. This concept will be familiar to those who have studied connectivism, as the idea of connectivism is that knowledge consists of the relations between nodes in a network – in other words, that knowledge is a graph (and not, say, a sequence of facts and instructions).

Graphs, and especially dynamic graphs, have special properties, the results of which can be found in social network theory, in modern artificial intelligence, and in economic and political theory.


Ben and Stephen delved deeper into this concept during their online conversation and touched upon a possible future that can be derived from the movement to web3.

Stephen described the common traits of a number of network structures and systems.

Some common networks:
– A social network (or social graph) which is made up of people (and sometimes bots pretending to be people) connected by relations of ‘friending’ or ‘following’ and interacting by means of ‘texting’ or ‘messaging’.
– A neural network, which is made up of neurons (or, in computers, artificial neurons), connected by means of axons or connections, interacting by means of ‘pings’ or ‘signals’
– A financial network, which is made up of accounts, which have ‘balances’ of various sizes, and which are connected through contracts and interact through transactions
– A semantical network (such as the Semantic Web), which is a collection of resources connected through an ontology and which interact through logical relations with each other.

In all of these, the core idea is the same. We have a set of entities (sometimes called nodes or vertices) that are in some way connected to each other (by means of links or edges or transactions or whatever you want to call the linkage and the interaction through the linkage).
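That shared structure – entities plus the linkages between them – is easy to make concrete. A minimal sketch in Python, with invented nodes and relations:

```python
# A tiny social graph as an adjacency list: each node maps to the set of
# nodes it is connected to (its neighbours).
graph = {
    "alice": {"bob", "carol"},
    "bob":   {"alice"},
    "carol": {"alice", "dana"},
    "dana":  {"carol"},
}

def degree(g, node):
    """Number of connections (edges) a node has."""
    return len(g[node])

def edges(g):
    """All undirected edges, each counted once."""
    return {frozenset((a, b)) for a, nbrs in g.items() for b in nbrs}

print(degree(graph, "alice"))  # alice is the best-connected node here
print(len(edges(graph)))       # total number of edges in the graph
```

Everything else – social networks, neural networks, financial networks – is this same skeleton with different node types and different interactions along the edges.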


Graphs help us to recognise the relationships between actors and how they interact with each other within an environment. It isn’t necessarily the individual objects we should focus on when conceptualising graphs; it’s the things that anchor the objects together – the connections – that prove interesting.

Though a graph can’t tell us everything about a networked system, it can be a very effective visual framework for recognising the constituent components of the network and how they operate. It also allows us to begin to recognise elements that cannot be seen with our eyes – the invisible, underlying, subtle and nuanced contacts, properties, connections and interactions – the currents that flow within the graph structure and shape it, its electromagnetic force, for want of a better term. The graph itself depicts the physical format of a network, but it is what the graph allows us to perceive, visibly and invisibly, in greater depth that makes it important.

In connectivism we have explored the idea of thinking of knowledge as a graph, and of learning as the growth and manipulation of a graph. It helps learners understand that each idea connects to another, and it’s not the individual idea that’s important, but rather how the entire graph grows and develops.


An example of a directional graph might be one that looks at the reporting relationships in a hierarchically structured institution or organisation. At the top of the hierarchical structure might sit the CEO or President, beneath that the layer of Vice Presidents, Chief Operational Officers, Chief Finance Officers etc., beneath that layer might sit the directors of divisions or departments, beneath that the managers and coordinators layer and then the employees working within teams. The reporting relationships would commonly flow upward in one direction from the base of the hierarchy, layer by layer, towards the top.

A real-life example of a static bidirectional graph might be a return-journey flight path. For a flight where a stopover is required before arriving at the desired destination, you would depart from your origin airport, travel via your stopover waypoint, and arrive at your final destination – let’s say Dublin (origin) to Copenhagen (destination) via Oslo (waypoint). The return journey demonstrates this in reverse.
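The difference between the two examples comes down to edge direction, which a short sketch (with invented names) makes concrete:

```python
# Directional graph: reporting relationships flow one way, up the hierarchy.
reports_to = {
    "employee": "manager",
    "manager": "director",
    "director": "ceo",
}

def chain_of_command(person):
    """Follow the directed edges from a person up to the top of the hierarchy."""
    chain = [person]
    while chain[-1] in reports_to:
        chain.append(reports_to[chain[-1]])
    return chain

# Bidirectional graph: a flight path that can be traversed in either direction.
route = ["Dublin", "Oslo", "Copenhagen"]
return_route = list(reversed(route))

print(chain_of_command("employee"))  # ['employee', 'manager', 'director', 'ceo']
print(return_route)                  # ['Copenhagen', 'Oslo', 'Dublin']
```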

Examples of dynamic graphs that may have either directional or bidirectional relations between vertices would be the internet, social networks and perhaps a less obvious example might be historical timelines (directional, as time advances). 

The use of dynamic, living graphs appears across many fields – science, anthropology, psychology, etc. – and is particularly common within computer science, machine learning, and artificial intelligence. In computer science we can perceive graphs as Merkle trees and Directed Acyclic Graphs (DAGs), the structures underpinning version control in Git.
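The Merkle tree idea can be sketched with nothing more than a hash function: hash the leaves, then hash pairs of hashes until a single root remains. This is a simplified illustration (real Merkle structures in Git and IPFS add type and length headers before hashing, and the file names below are invented):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Hash the leaves, then pairwise-hash each level until one root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

files = [b"index.html", b"style.css", b"readme"]
root = merkle_root(files)

# Changing any single leaf changes the root, so one hash authenticates the whole tree.
assert merkle_root([b"index.html!", b"style.css", b"readme"]) != root
```

This is the same property that lets a single IPFS hash stand for an entire website.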

Some very helpful resources

Coming off the back of last week’s conversation with Tony Hirst, where we touched upon Jupyter Notebooks as a way of actively and dynamically learning and practising, I thought I’d explore the web for an interactive dynamic simulation of graph theory in order to learn a little more about it. I came across a very useful Graph Theory resource created by Avinash Pandey on GitHub that models 3D graph structures.

In my online search to better understand this topic I came across another great resource by Nicky Case that helped me to think in terms of graphs and the science of networked systems. The resource is an interactive online game called Crowds. In it, Case attempts to deconstruct theories of network science to use as a plausible explanation for the phenomena we know as the Madness of Crowds and the Wisdom of Crowds.

There were a number of key takeaways from this resource for me. Concepts about complex connections were introduced, wherein the greater the exposure someone has to an idea amongst their social networks, the greater the chance that they will be influenced by it. Threshold factors might also influence whether an idea spreads beyond certain nodes in a network.
The resource further looks at the idea of contagion, explores consensus, and concludes by examining the importance of concepts such as bonding within networks, bridging between networks, and the influence of small-world networks.
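The threshold idea can be played with as a tiny simulation: a node adopts an idea once the fraction of its neighbours who have already adopted it reaches some threshold. The network, seeds, and threshold below are invented for illustration:

```python
# An invented network: each node maps to its set of neighbours.
network = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c", "e"},
    "e": {"d"},
}

def spread(g, seeds, threshold=0.5):
    """Repeatedly let nodes adopt the idea until nothing changes."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in g.items():
            if node not in adopted:
                if sum(n in adopted for n in nbrs) / len(nbrs) >= threshold:
                    adopted.add(node)
                    changed = True
    return adopted

# One seed isn't enough to push any neighbour past the threshold...
print(sorted(spread(network, {"a"})))       # ['a']
# ...but two well-placed seeds carry the idea through the whole network.
print(sorted(spread(network, {"a", "d"})))  # ['a', 'b', 'c', 'd', 'e']
```

Where the seeds sit in the network matters as much as how many of them there are – which is exactly the point Crowds makes interactively.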

Screenshot from Crowds – Bonding, Bridging, and Small World Networks

One of the sources cited in Case’s resource is a book co-authored by Nicholas Christakis and James Fowler called Connected, which analyses the importance and value of connection, of network, of community, and the influence each of us can have on it, and that it can have on each of us. Here are links to the book and a thought-provoking TED talk on YouTube (18 mins), which gives an insight into the impact of our social networks. From their research they coined the ‘three degrees of influence’ theory, essentially arguing that, even though the influence dissipates, it may be possible for our actions, behaviours, etc. to have consequences and influence up to three degrees of separation away from us (our friends’ friends’ friends). This theory is contentious.
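The ‘three degrees’ idea maps naturally onto a breadth-first search that stops after three hops. A sketch over an invented chain of friendships:

```python
from collections import deque

# Hypothetical friendship graph: a simple chain for illustration.
friends = {
    "me":  {"ann"},
    "ann": {"me", "ben"},
    "ben": {"ann", "cat"},
    "cat": {"ben", "dan"},
    "dan": {"cat"},
}

def within_degrees(g, start, max_degree=3):
    """Breadth-first search: everyone reachable within max_degree hops of start."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_degree:
            continue                      # influence stops spreading here
        for nbr in g[node]:
            if nbr not in seen:
                seen[nbr] = seen[node] + 1
                queue.append(nbr)
    return {n for n, d in seen.items() if 0 < d <= max_degree}

print(sorted(within_degrees(friends, "me")))  # ['ann', 'ben', 'cat'] - dan is four hops away
```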

In the TED talk, Christakis discusses common objects – a pencil and a diamond – both made from carbon. But the carbon atoms in each are connected and arranged in different ways. The lead in a pencil is soft, breakable, and dark; diamonds are hard and clear. Both items are built from the same underlying atoms; however, those atoms are connected in different ways and this affects what they become.

Pencil lead and diamond – both made from carbon atoms, connected in different ways

According to Christakis, those properties (soft, dark, hard, clear)

“do not reside in the carbon atoms; they reside in the interconnections between the carbon atoms, or at least arise because of the interconnections between the carbon atoms.”

Source – timestamp 15.09
Structure of graphite and diamond – Connections matter

“So, similarly, the pattern of connections among people confers upon the groups of people different properties. It is the ties between people that makes the whole greater than the sum of its parts. And so it is not just what’s happening to these people — whether they’re losing weight or gaining weight, or becoming rich or becoming poor, or becoming happy or not becoming happy — that affects us; it’s also the actual architecture of the ties around us. Our experience of the world depends on the actual structure of the networks in which we’re residing and on all the kinds of things that ripple and flow through the network.”


Connections matter.

#EL30 Week 2 – Cloud

In the second week of #EL30 we explored the topic of Cloud. Stephen begins by introducing the idea:

The joke is that “the cloud” is just shorthand for “someone else’s computer.” The conceptual challenge is that it doesn’t matter whose computer it is, that it could change any time, and that we should begin to think of “computing” and “storage” as commodities, more like “water” or “electricity”, rather than as features of a type of device that sits on your desktop.


On initial reading I found this a difficult concept to wrap my head around. How could I consider cloud computing to be like water? I found it a little difficult to interpret it that way. However, the more I learned about the topic from the resources, the weekly activity, from reading fellow participants’ blog posts, and from my own research, the more Stephen’s words carried weight and meaning for me.

One of the defining features of cloud computing is that it can be an on-demand self-service – “the cloud is a form of utility computing”. And like a utility service – take, for example, electricity in our homes – we can choose a provider, sign up and create an account with them, and use the service whenever we need it, as much or as little of it as we want. We’re then billed by the provider for the extent of our usage. The utility or commodity comparison makes clear sense in this regard. One distinct advantage the cloud possesses is that by placing our data and the services we use on the cloud, they become accessible to us from virtually anywhere – not just at home but on the train, in the office, and on any device that can access the web.

#EL30 Week 2 conversation with Tony Hirst

In the #EL30 guest conversation this week, Tony Hirst, Senior Lecturer in Telematics at the Open University, UK, spun up a virtual server using Digital Ocean, installed a pre-built Docker container into it, and ran the Jupyter Notebook application. The Jupyter web app facilitates the creation of a shared living document: manipulations of the programming code (the input) can be run and the outcomes (the output) viewed instantaneously. Stephen points us towards the idea that

These new resources allow us to redefine what we mean by concepts such as ‘textbooks’ and even ‘learning objects’.


It took Tony a matter of seconds to do this, at a cost of approximately £0.03 per hour of usage. Tony, in England, sent Stephen, in Canada, the IP address for the web application auto-generated by Digital Ocean, in the form of a URL for accessing the container he had spun up. Halfway around the world, Stephen browsed to this URL and was able to access the application along with Tony. Essentially, the cloud made it possible for them to share a computing service over the internet.

In terms of its application to eLearning, cloud offers some powerful opportunities and benefits: virtual sandboxes could be easily created to test out proprietary software on (or any software) at much reduced costs to both educators and students in a variety of disciplinary contexts; through removal of the barriers of device, OS, particular device configuration etc., anyone with a web-enabled device and internet access could begin working independently, with peers, teachers, or colleagues, within the specific online learning environment of choice.

In particular, anyone who has used or is familiar with some Linux OS distributions will be familiar with package installation and the need to have dependencies installed in order for the package you want to work properly. Spinning up a virtual server to host a pre-built container, including all the necessary code and dependencies to support the application(s) you want to work on with students or colleagues, removes the problematic support requirement of troubleshooting the myriad of different devices, OSes, configurations, dependencies and versions that each individual person’s device can have. The cloud overcomes this – everyone starts off singing from the same hymn sheet, the same shared online computing environment.

Docker – Containerized Applications

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.


An aspiration that Tony felt would be worthwhile for institutions to work toward is that of developing institutional clouds – for more institutions to begin to offer Docker machines (or similar) that staff and students could log into via URL from their personal computers. These machines, running on institutional servers, could, for example, host a multiplicity of Docker containers or container clusters catering for an extensive array of disciplinary and trans-disciplinary subject matter, interfacing through a web browser. Ideally each could be mounted to the individual’s own file store so that items could be launched and saved back to it. Currently, using the Digital Ocean virtual server setup described above, it’s not possible to save work within the Jupyter Notebook running in the Docker container. Work done during a session is destroyed upon exiting the application unless it is downloaded in the specific file format (.ipynb) that can be re-uploaded to continue working on it the next time the containerised application is used.

Here’s a snapshot of Tony’s ‘showntell’ GitHub repository. Notice that he has branches with different Jupyter Notebooks for astronomy, chemistry, computing, electronics, engineering, linguistics, etc. Same application, multiple trans-disciplinary uses and users.

In a context where I am reading more and more about ownership of our own data and self-hosting as a possible means of reclaiming digital identity, Keith Harmon set an interesting and intriguing scene over on his blog:

As a complex system myself, I self-organize and endure only to the degree that I can sustain the flows of energy (think food) and information (think EL 3.0) through me. The cloud is primarily about flows of information, and the assumption I hear in Stephen’s discussion is that I, an individual, should be able to control that flow of information rather than some other person or group (say, Facebook) and that I should be able to control the flow of information both into and out of me. I find this idea of self-control, or self-organization, problematic—mostly because it is not absolute. As far as I know, only black holes absolutely command their own spaces, taking in whatever energy and information they like and giving out nothing (well, almost nothing—seems that even black holes may not be absolute).


Keith provides a deeper insight into his perspective and extends an invite to his readers to journey with him outside, where he likes to think about these kind of discussions.

It helps me to walk outside for discussions such as this, so come with me into my backyard for a moment. The day is cool and sunny, so I’m soaking in lots of energy from sunlight. I’ve had a great breakfast, so more energy. I’ve read all the posts about the cloud in the #el30 feed, so I have lots of information. Of course, I’m pulling in petabytes of data from my backyard, though I’m conscious of only a small bit. Even with the bright light, I can see only a sliver of the available bandwidth. I hear only a little of what is here, and I certainly don’t hear the cosmic background radiation, the echo of the big bang that is still resonating throughout the universe. I’m awash in energy and information. I always have been. Furthermore, I can absorb and process only a bit (pun intended) of the data and energy streams flowing around me, and very little of this absorption is my choice. Yes, if the Sun is too bright, I can go back inside, put on more clothing, or put on sunscreen, but really, what have I to do about the flow of energy from the Sun?


I’m going to take some time to mull over the thought of being truly able to control the flow of information both into and out of me. It’s an intriguing question.

If anyone is seeking to bring Jupyter Notebooks into their practice, Tony has authored a resource entitled “Getting Started With Jupyter Notebooks for Teaching and Learning”, which might be useful.

If you’d like to test-drive Jupyter for yourself you can do so at

If your HEI provides Microsoft O365, your credentials should log you into notebooks.azure.com, where you can run notebooks.

Google also offers a notebook environment at colab.research.google.com, which includes integrated storage within Drive.
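Part of what makes notebooks portable across services like Azure Notebooks and Colab is that, under the hood, a Jupyter notebook is just a JSON document following the nbformat v4 schema. As a minimal sketch (the cell contents and filename are my own illustration), here’s how you could generate a valid notebook with nothing but the Python standard library:

```python
import json

# A minimal Jupyter notebook is plain JSON in the nbformat v4 schema:
# a list of cells plus some metadata and version numbers.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": ["# Hello #EL30\n", "A notebook generated as plain data."],
        },
        {
            "cell_type": "code",
            "metadata": {},
            "execution_count": None,
            "outputs": [],
            "source": ["print('documents are data')"],
        },
    ],
}

# Write it to disk; Jupyter, Azure Notebooks, or Colab can open this file.
with open("hello_el30.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```

Opening the resulting `.ipynb` file in any of the hosted environments above shows the same two cells, which is a nice concrete instance of the course’s “documents to data” shift.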

#EL30 Week 1 – Data


Week 1 of #EL30 addressed the topic of Data. Within that, two core conceptual challenges relating to eLearning were explored, “first, the shift in our understanding of content from documents to data; and second, the shift in our understanding of data from centralized to decentralized.”

All of this sits against the backdrop of “what is now being called web3, [in which] the central role played by platforms is diminished in favour of direct interactions between peers, that is, a distributed web”. The topic of data is relatively new to me and I am figuring much of it out as I go.

Our data exists online across multiple distributed nodes and each of us embodies the unique identifier that links all of this data together. In Stephen’s week 1 data summary article he highlights how digital data is beginning to permeate many aspects of our lives – “We are beginning to see how we generate geographic data as we travel, economic data as we shop, and political data as we browse videos on YouTube and Tumbler. A piece of media isn’t just a piece of media any more: it’s what we did with it, who we shared it with, and what we created by accessing it.” The traces of data we leave behind online create a depiction of us for those who can see them: an online identity assembled from breadcrumbs in the digital woods.

Activity – Conversation with Shelly Blake-Plock

Week 1 conversation with Shelly Blake-Plock, Co-Founder, President and CEO of Yet Analytics

The week 1 conversation with Shelly Blake-Plock, Co-Founder, President and CEO of Yet Analytics, covered a range of interesting topics. Discussion ebbed and flowed, touching upon concepts such as:

  • using data in actionable ways to understand learners, to improve instruction and content and to manage data systems that support learning,
  • the Experience API (xAPI) specification,
  • the xAPI enterprise learning ecosystem,
  • Learning Record Store (LRS),
  • data ownership and management,
  • identity management applications,
  • the privacy trade-off of these systems.

There was good discussion around the Experience API, commonly abbreviated to xAPI, a modern specification for learning technology that helps turn learning activities, experiences and performance into data. Shelly was the Managing Editor of the IEEE Learning Technology Standards Committee Technical Advisory Group on xAPI (TAGxAPI), which created a technical implementation guide for xAPI.

Essentially, xAPI was created as a way of tracking learning experiences and performance that extends beyond the bounds of the traditional Learning Management System (LMS) and the content and activities that learners launch from within it. It allows an individual’s learning record to move more freely out of silos such as the LMS, as long as it is in xAPI format or can be converted to it. The notion is that learning occurs everywhere, not simply in the LMS or the classroom, and the data generated by learners’ experiences and performance (online and offline) can now be tracked and sent as xAPI statements (signals) from a range of different origins, such as mobile apps, simulations and games, and the physical world through wearable technology and sensors.

With this data it becomes possible to analyse how learners are learning and potentially improve the content and activities they receive. xAPI statements about learning experiences can be sent, via a number of launch mechanisms, to a Learning Record Store (LRS), which collects data about how learners interact with their learning environments. Analysis of this data can be automated through machine-learning algorithms, depending on what type of information is being sought.
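To make this concrete, an xAPI statement is a small JSON document with an actor, a verb and an object, which a learning application sends to the LRS’s statements endpoint. Here’s a minimal sketch in Python; the learner, activity ID and LRS URL are illustrative, and a real request would also need the LRS’s credentials and an X-Experience-API-Version header:

```python
import json

# An xAPI statement: "actor verb object" expressed as JSON.
# Verbs and activities are identified by IRIs; the "completed" verb
# below is from ADL's published verb vocabulary.
statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "http://example.com/activities/el30-week1",
        "definition": {"name": {"en-US": "#EL30 Week 1 - Data"}},
    },
}

# Serialise for transport; a client would POST this payload to e.g.
# https://lrs.example.com/xapi/statements with Content-Type: application/json.
payload = json.dumps(statement)
```

The LRS simply stores every statement it receives, which is what lets reporting and analytics tools query one consistent stream of activity data regardless of which app, game or sensor generated it.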

How xAPI works with the LRS


Most of us have likely become familiar with the term ‘surveillance capitalism’, the purported business model employed by many web2 corporations and platforms. The online data each of us generates (our digital footprint) is already bought and sold to online advertising and marketing agencies. We unwittingly and nonchalantly give our ‘consent’ by clicking agree to the terms and conditions of the seemingly ‘free’ online platforms and services we sign up to.

The ‘business model’ is explained early in this presentation by Laura Kalbag of Ind.ie:

Laura Kalbag speaks about indie design at WordCamp, London

When viewing all of this through a critical lens, talk about tracking and gathering learner data for analysis immediately brings with it the need to talk of a range of considerations around ownership, ethical use, privacy, security, and data governance. I’ve noted similar sentiments from many of my fellow #EL30 participants.

The use of learning analytics to support the student experience could afford valuable insights, but there are ethical implications associated with collection, analysis and reporting of data about learners.

Rebecca Ferguson (2012) defines learning analytics as “the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs.”

JISC UK’s Code of practice for learning analytics, authored by Niall Sclater and Paul Bailey, provides very helpful guidance in this regard under eight key headings, identified to help institutions (and possibly other organisations) carry out responsible, appropriate and effective analysis of the data they gather:

  1. Responsibility
  2. Transparency and consent
  3. Privacy
  4. Validity
  5. Access
  6. Enabling positive interventions
  7. Minimising adverse impacts
  8. Stewardship of data

Niall Sclater also compiled a literature review of the ethical and legal issues for this code of practice, in which he collates critical ethical questions from a diverse body of literature relating to many of the areas identified in the code of practice. Here’s a snapshot of some of the thought-provoking questions posed in that review:

Ethical question – Code of Practice area

  1. Does the administration let the students/staff know their academic behaviours are being tracked? (Hoel et al., 2014) – Responsibility
  2. Does an individual need to provide formal consent before data can be collected and/or analysed? (Campbell et al., 2010) – Transparency and Consent
  3. How transparent are the algorithms that transform the data into analytics? (Reilly, 2013) – Validity
  4. Who can mine our data for other purposes? (Slade & Galpin, 2012) – Stewardship of data
  5. Who is responsible when a predictive analytic is incorrect? (Willis, Campbell & Pistilli, 2013) – Privacy
  6. Does [a student profile] bias people’s expectation and behaviour? (Campbell et al., 2010) – Minimising adverse impacts

On the #EL30 course I’ve read a bit about IndieWeb, a community based on the principles of owning your own domain and your own data. IndieWeb aims to make it easy for everyone to take ownership of their online identity and believes that people should own what they create (https://opencollective.com/indieweb#about). I definitely want to explore this further in light of the next generation of learning technologies.

#EL30 Introductory post

I’m probably a little late to the eLearning 3.0 MOOC (#EL30) party; nonetheless, I hope to avail of this opportunity to learn from Stephen Downes’ MOOC and from the network of experienced people created through the Connectivist learning approach that he employs (more info here and here). A diverse, energetic, knowledgeable network is already clearly emerging in the course feeds area and I hope to contribute to this community where I can. At least this blog post should allow me to submit my feed for RSS harvesting!

An introductory article by Stephen entitled ‘Approaching E-Learning 3.0‘ had me immediately hooked:
“If you’re reading this, then this course is for you. You’ve demonstrated the main criterion: some degree of interest in the subject matter of the course.”

I’m certainly interested, so let’s give it a go!

The focus of #EL30 will be to explore key domains that Stephen envisages within the next generation of distributed learning technology. The main topics being explored are laid out in this image.

#EL30 Topics

In the presentation Stephen gave to launch #EL30, he rounds out the detail of each of these topics and considers the impact of the next wave of emerging and distributed learning technologies:

#EL30 launch presentation by Stephen Downes

By way of a quick introduction, I work as Technology Enhanced Learning Manager in Graduate & Professional Studies at the University of Limerick, Ireland. I’m involved in the design, production and research of flexible online and blended programmes, and in shaping related institutional structures and processes.

I’m interested in open and online learning, educational technology, instructional (learning experience) design, technology in general, and all associated literacies. I’ve been thinking about establishing my own web presence for some time and participating in this MOOC has given me the impetus to go and do it.

Already an abundance of topics have piqued my interest – linked data, web (re-)decentralisation (see SoLID, created by Sir Tim Berners-Lee), Indieweb, gRSShopper, Webmentions, the Fediverse, RSS aggregation and harvesting, and many more.

I look forward to exploring and understanding them in greater depth.