#EL30 – Experience/Creativity Task

The penultimate week of the #EL30 MOOC, Experience.

The learning task, as set by Stephen, called for us to creatively represent our experiences of the #EL30 cMOOC and to post about it:

Be creative! Using the medium of your choice, create a representation of your experience of E-Learning 3.0. Then post your creation (or post a link to your creation) on your blog.

Here’s a good example of the sort of thing you could create, by Kevin Hodgson (who apparently also studied mind reading, as he completed this task before it was posted).

If you need inspiration, visit the DS106 Assignment Bank and select one of the assignments, and then interpret it in the light of E-Learning 3.0.


Be creative! No problem…! Creativity has most often come to me spontaneously rather than being something I could tap into on demand.

This was a tough task, but an enjoyable one once I reflected on it. And the example given of Kevin’s work was incredible.

At this time of the year, I feel a bit like a pot that has almost reached the boil and needs a valve released. So it was fantastic to have the inspiration of other participants’ creative posts, and of the online exchange between Stephen and Amy Burvall, to refer to this week.

Conversation between Stephen Downes and Amy Burvall

Creating a representation of my experience of E-Learning 3.0

To represent my own experience (in a festive way, as it’s almost Christmas!) I chose to capture some pictures of a winter tree with lights and create a quick GIF. The representation comprises several metaphorical elements, and I explain my interpretation of them below.

With this image my intention is to focus on growth, the notion of emerging community, change and adaptation, decentralisation, a blend of both the natural (tree) and human (electrical lights), independence and interdependence, and the collective whole.

I did not create or plant the tree, which I feel is apt for this #EL30 experience/creativity task. I’m just capturing my sense of it by taking a snapshot, the view of it from my perspective.

The tree as a structure is always growing, ever-changing and adapting. It sheds its leaves at certain times and grows new ones. It is both a product of its environment and something that shapes it. Much the same can be said about the future of education and e-learning.

The root structure of the tree also serves as an interesting metaphor. The underground roots, not visible in the photograph, might well describe all that remains hidden on the #EL30 journey; nonetheless, those hidden roots are vitally important in sustaining the tree’s survival and growth. Many details, messages and talking points from this iteration of #EL30 are likely being discussed and analysed in lots of physical and virtual spaces and places – blog posts and social media are but some of these. These thoughts and conversations might be hidden, but they are no less important to the future of education and e-learning.

The lights on the tree might represent more than one thing. Light is seen as something hopeful, something illuminating and good. The network of lights might be the modules and topics we have encountered during #EL30, the collective neural network of our posts, or the ideas and creative learning artefacts which we have all contributed to #EL30.

But I prefer to think of each light on the tree as a person from the group of #EL30 participants. The lights are all connected together by an electrical current, wires and cable (these vectors could all be metaphors of their own – I’ll leave that up to you to decide) to form the graph or network structure of lights distributed in a decentralised way all around the tree. Perhaps it might have been better if the lights were all of different colours, or if some faded on and off, to represent the diversity of each of us, our interactions, our interests and our perspectives.

No node or light stands out as being the centre. There is an evident independence and interdependence. There is no light visibly larger or brighter than another, a concept I feel that Stephen has tried to bring across in this course – everyone can and should be encouraged and empowered to actively and creatively contribute to the decentralised community experience and to take opportunities to communicate directly with other lights.

Featured image by Free-Photos


Some things I’ve experienced so far on this #EL30 journey

The latest topic for #EL30 was Community. Stephen set an open-ended task for course participants to, as a community (I believe this was meant in a loose way), come up with and reach consensus on a task, the completion of which denotes being a member of the community.

As a community, create an assignment the completion of which denotes being a member of the community. For the purposes of this task, there can only be one community. For each participant, your being a member of the community completes the task.


Further to this, Stephen added his own nice decentralised twist:

This week’s task is deliberately open-ended. It requires the formation of a community, but only one community, with tangible evidence of consensus. How to do this? How to even get started? That’s the challenge…

Some people may ask, “What’s the point?” Well, as we discussed in this week’s conversation (also in this newsletter) it’s a challenge to create consensus without deferring to an authority – a trusted source, if you will. In a course like this, that’s usually the instructor. But not this time. This is – on a small scale – the same problem we have on a larger scale. How do we create consensus with no common ground?

This task is challenging on several fronts. Can a community be created at all? What if there are competing communities? How many participants can the community actually encompass? How do people join at all? The conditions for succeeding in this assignment are very simple – be a member of that community. But the manner in which this is to be accomplished is not clear at all.


Out of this challenge sprang some proposals, in the form of blog posts on participants’ sites. I was delighted to see this, as I had been puzzling for a while over how we, as a group of participants, would manage to start trying to reach a consensus without a ‘central’ node that we all had access to – the actual MOOC itself being the only such node I could think of, and none of the participants apart from Stephen can edit that directly. Anyone who completed the earlier task of subscribing to the course feed list through their RSS aggregator of choice would have been able to see the proposal posts for this task appear there. The first proposal I encountered on my RSS was from Roland:

I suggest we all post about our experiences in this course. It would be a short or long piece about the content, the way it’s being organized, the way the learners did or did not interact with each other or how we reacted in blog posts and on social media.
Such a post seems like a natural thing to do, there are no good or bad posts, yet it would affirm our being together in this thing – #el30.


I really liked the idea of posting about our course experiences to date and commented on Roland’s post agreeing with his proposal as tangible evidence of trying to arrive at group consensus. I had seen that other participants had also expressed their agreement in the comments section of the post.

With that proposal in mind, I will set down a brief synopsis of my experiences to date on the E-Learning 3.0 journey.

My experiences of this course

Researching, reading and writing take a substantial amount of time for me. I find them difficult. Derek Sivers sums me up perfectly when he writes:

I’m a very slow thinker.
When a friend says something interesting to me, I usually don’t have a reaction until much later.

When someone asks me a deep question, I say, “Hmm. I don’t know.” The next morning, I have an answer.

I’m a disappointing person to try to debate or attack. I just have nothing to say in the moment, except maybe, “Good point.” Then a few days later, after thinking about it a lot, I have a response.


I could also add the word ‘reader’ to that.

In truth, this is the first cMOOC I have ‘registered for’ and participated in (as best I could, at least; I have a lot of learning and practice to do to improve my overall participation online). I must say that I have thoroughly enjoyed the experience. It has been a challenge, but a wholly worthwhile one. The content and conversation have been of a very high quality.

I don’t think I would consider participation in #EL30 the formation of a ‘community’. Despite that, in a way, I still feel some sort of connection to the group – a shared journey or the like. Here I will defer to more knowledgeable participants, who provide useful descriptions of what they believe it to be:

At most, I would compare it to the residence or municipality community which is defined by something like a common zip code (here, by using the hashcode el30), and whose residents have, in a certain limited sense, a common ‘fate‘ (again limited, to the 9 weeks).

Matthias Melcher (x28)

An affinity space is a place – virtual or physical – where informal learning takes place. According to James Paul Gee, affinity spaces are locations where groups of people are drawn together because of a shared, strong interest or engagement in a common activity.

Kevin Hodgson (dogtrax)

Much of the course content is still in flux in my mind. I often find myself mulling over it to see if I have understood it well or considering where it might be applied in my practice. I have found all of the topics to date very interesting, even though I have admittedly grasped some of them far better than others. I really enjoy both the technical and academic focus that comprises each topic.

Some of it sits quite easily with the work I am doing in higher education. More of it is a little like trying to stand still on shifting tectonic plates.

I agree with the original intent and premise that

the idea of the Internet – distributed, social, networked – influences the structure of education, teaching, and learning.


and subscribe to this in an ideological sense; however, the many ways that this might manifest in higher education remain to be seen. As a reader and great admirer of Audrey Watters’ extensive work, I am a bit skeptical of anything that attempts to predict future trends in educational technology and of what drives them within higher education. That said, this experience has been different.

I love how Stephen used his own open source software (gRSShopper) to build the MOOC, how he emphasised free use of an RSS feed aggregator to keep track of each participant’s blog posts, and how he highlights the importance of decentralisation versus centralisation of power and influence. This is all incredible and something I feel I also buy into.

I learned a huge amount from reading Stephen’s commentaries, from the links provided to resources, and from all participants’ blog posts via my RSS aggregator.

As someone who created this website in order to be able to communicate and interact on #EL30, the major difficulty for me has lain in trying to keep apace with the regular tasks and with writing something coherent.

Design of the course and Interaction

As someone who works in learning experience/instructional design of online and blended programmes for postgraduate and professional education, I really enjoyed the explorative and reflective nature of the course. It was very clearly structured, module by module. I loved the synopsis for each topic and the links to further resources to expand upon ideas, if you so wished. From the moment I saw the course, I felt it was very intuitive, and I thought its predominantly asynchronous nature suited it very well. It allowed me time to

  • read about, reflect on and grasp complex concepts,
  • view weekly online video conversations between Stephen and guests in my own time,
  • review resources sourced by Stephen, other participants and myself,
  • keep an eye on the backchannel social media streams – Twitter and Mastodon,
  • generate my own ideas to write blog posts about them,
  • read and comment on fellow participants’ blog posts,
  • complete tasks and build competencies along the way.

Stephen certainly walks the talk of Open Educational Practices (OEP) with #EL30. Each week we are provided with a link to view the working summary article for that week’s topic. Not only that: we are always actively encouraged by Stephen to contribute to the document if we feel that we can – to post comments, suggestions and further resources. This is the first time I have encountered this, and to be honest I don’t feel knowledgeable enough to contribute, even though I know it is possible. I tend to take a sneak peek at the end-of-topic article before it comes out in the newsletter, as it is always extremely informative and detailed.

I see that some people live out a portion of their lives on social media. I am not one of those people. For some reason, I have a natural inclination not to post on social media platforms, even though I do have Twitter and Mastodon accounts. It’s not that I think sharing is a bad thing; I just often find myself questioning myself before making a post – “Is this really going to be valuable to somebody?” or “Will it make a difference if I post this or not?” Very nihilist of me! I’m probably a little bit independent and happy to be – engaging on social media doesn’t captivate me.

Some of the people participating in #EL30 are far more accomplished and prolific bloggers, writers, educators and names in their respective fields. It has been a pleasure and a privilege to communicate and interact on some level with each of them.

A key takeaway for me from the experience to date is that I need to make more time to get my thoughts out and down – outside of participating in a MOOC – and to become more proactive about posting, rather than reactive to what others are writing.

Featured Image by Anthony Tori on Unsplash


#EL30 – Recognition Task

I’m falling a little bit behind on #EL30 at the moment, hoping to put some time aside to catch up properly in the coming weeks.

For the ‘Resources’ module of the course, Stephen set the following task for us:

Create a free account on a Badge service (several are listed in the resources for this module). Then:

– create a badge
– award it to yourself
– use a blog post on your blog as the ‘evidence’ for awarding yourself the badge
– place the badge on the blog post.

To assist you in this, you can see this blog post where I did all four steps with Badgr. (I also tried to work with the API, with much less success).


I had previously created an account on Openbadges.me, so I decided to use that badge service for this task. I’ve already seen a couple of posts from #EL30 participants demonstrating how they created and issued their badges using Badgr, so this post describes my experience of doing it on a different badge service, for comparison and for anyone who’s interested.


There were 4 main tasks for me to complete on Openbadges.me in order to be able to successfully create and issue my badge on this blog post:

  1. Setting myself up as a badge issuer
  2. Creating or uploading a graphic to represent my badge
  3. Creating and Publishing my badge for issue
  4. Issuing my badge to a recipient (me! – but I can issue it to you too, read on ’til the end if you’re interested in adding this badge to your collection)

1. Setting up as a badge issuer

After logging into Openbadges you are presented with a dashboard area with a navigation menu on the left-hand side. The main content area of the dashboard displays tiled, flashcard-like blocks, each explaining a different thing that Openbadges can be used for. The navigation menu on the left takes us to the areas of the site where badges are created and issued, so that’s where I began.

Following Stephen’s useful blog post based on his experiences using Badgr, another badge service, I set about the first task – setting up as a badge issuer. I selected the Preparing badges menu item and the Badge issuers sub-item. In the main content area I was presented with a pink button in the upper right-hand corner to “+ Create issuer”, as I hadn’t created any issuers before. I clicked the button and followed the on-screen instructions to set myself up as a badge issuer, giving myself permission to issue the badges that I create.

Setting up a badge issuer

2. Creating or uploading a graphic to represent the badge

As I didn’t have a pre-made graphic to use for my badge I decided to create one using the Openbadges Graphics library, which was accessible as a sub-menu item from the Preparing badges menu item.

Similarly, after selecting this sub-menu item I was presented with a pink button in the upper right-hand corner of the main content area of the screen that prompted me to “+ Create graphic”.

The user interface was intuitive and allowed me to create and adapt a graphic of my own beneath 7 simple headings:

  1. Background
  2. Shapes
  3. Inner shapes
  4. Text
  5. Curved text
  6. Banners
  7. Icon

The Graphic builder interface

For my badge I used 3 of those headings – Shapes, Curved text and Icon. Once I was happy with the look of my graphic I saved it and was then able to preview it. The next step was to create the framework for the badge including the awarding criteria for someone looking to be awarded it.

3. Creating and Publishing the badge

It’s important to recognise that the graphic is only one component of the badge. People can often think that a badge is simply a .jpg or .png image, but it’s the metadata we don’t always see, ‘baked into’ the badge, which is its most valuable aspect. This includes the criteria for being awarded the badge, any badge attributes (e.g. a badge awarded for completing 3 hours of CPD), the date of receipt, the date of expiry (if the badge needs to be renewed), the issuer details and the unique badge ID:

Adding primary badge details
Adding badge criteria
– the items that need to be accomplished in order to be awarded the badge
Adding badge attributes
e.g. An open badge can contain any number of attributes. For example, you may want it to represent 3 hours of CPD. To do this, you would enter the name as ‘CPD’ and the value as ‘3’.
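Since so much of a badge’s value lives in this hidden metadata, here is a rough sketch of what an Open Badges v2 assertion can look like. The field names follow the Open Badges specification, but every URL, email address and ID below is invented for illustration, and the JSON that Openbadges.me actually bakes into a badge will differ:

```shell
# Write a hypothetical Open Badges v2 assertion to a file and show it.
# All identifiers here are made up; only the field names come from the spec.
cat <<'EOF' > assertion.json
{
  "@context": "https://w3id.org/openbadges/v2",
  "type": "Assertion",
  "id": "https://example.openbadges.me/assertion/abc123",
  "recipient": { "type": "email", "hashed": false, "identity": "learner@example.com" },
  "badge": {
    "type": "BadgeClass",
    "name": "EL30 Recognition",
    "description": "Awarded for completing the EL30 Recognition task.",
    "criteria": { "narrative": "Create a badge, award it to yourself, and evidence it in a blog post." },
    "issuer": { "type": "Issuer", "name": "Example Issuer" }
  },
  "verification": { "type": "hosted" },
  "issuedOn": "2018-12-14T00:00:00Z",
  "evidence": "https://example.com/blog/recognition-task"
}
EOF
cat assertion.json
```

The criteria, issuer and evidence fields are exactly the pieces filled in through the Openbadges.me forms above; the platform generates the IDs and verification details itself.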

I added the relevant details, criteria and attributes to my badge and also selected the graphic that I had created to visually depict it. I was satisfied with my badge so I clicked the grey ‘Publish’ button in the upper right-hand corner of the screen to finalise my badge creation.

Importantly, a badge can only be published if it contains a valid name, description, issuer, criteria and graphic. Once you publish the badge, you cannot easily go back and edit it unless you first unpublish (withdraw) it, which removes all the details, criteria and attributes you have added. This is important to bear in mind, so if you are not sure about your badge I would advise using the pink ‘Save Draft’ button before making the decision to publish.

My badge summary including auto-generated badge ID on the left-hand side

4. Issuing the badge to a recipient

With the badge created, the last step was to issue it to a recipient – in this case, myself. I clicked the Issuing badges menu item and the Manual entry sub-menu option. There were other options available to issue it to groups, via the API, or through rules-based issuing, but manual was the easiest option initially, as I knew I was only issuing it to one person.

The badge I had created appeared here and once I moused-over it I was given the option to ‘Issue badge’ via email. I clicked that option and simply entered my own email address in the space provided, clicked ‘Add recipient’ and finally ‘Issue badge’. I immediately received an email from Openbadges.me prompting me to click a link to download my badge.

Email issued from Openbadges.me

And here is what all the fuss was about!

My #EL30 ‘Recognition’ badge – awarded for completion of the task set out by Stephen Downes

For those of you who may be interested, Openbadges.me also provides reporting functionality to keep track of who has been issued the badge and when it was issued, which is accessible from the main navigation menu beneath Reporting:

Badge reporting

Want to be a recipient of my badge?

Just leave a comment on this post with

  • a link to your own badge (as evidence that you’ve completed Stephen’s task)
  • the email address you’d prefer me to send your badge to

and I’ll email you a link to download it.

Featured image:
 “Open Educational Resources – OER Rocket Badge with Moon 360×360 PNG” by Eugene Open Educational Resources is licensed under CC BY-NC-SA 2.0


#EL30 – Resources Task

This week, Stephen tasked us with creating a Content-Addressed Resource on the distributed web or Dweb.

Create a resource (for example, a web page) using IPFS, Beaker Browser, Fritter, or any other distributed web application (see some of the examples here). Provide a link to the resource using any method you wish.

To help prepare for this task, watch the video ‘From Repositories to the Distributed Web‘ as well as these videos on IPFS and Beaker: installing IPFS, making a website with IPFS, installing Beaker.


The final results

The Dweb?

Dweb – The letter D in the popular shortened version stands for either decentralised or distributed. Here’s a visual depiction:

Image Source

The conceptual framework behind the creation of a decentralised or distributed internet is an attempt to replace, improve upon, or at least run parallel to, the current centralised web (based on the web 2.0 premise). A feature of this current web is siloed, platform-specific engagement.

In part, this shift in thinking has transpired as a strong reaction to the centralisation of control and power in the hands of a few giant internet and tech companies. A decentralised, distributed web would, in theory, spread the balance of power and control more evenly across a network of far more people, in the hope of removing the reliance on web 2.0 platforms to communicate with one another, thus diluting their influence.

Further, the dweb has been brought about by the need to create an immutable way to preserve the largest open and accessible collection of human knowledge ever created.
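One loose way to see what “content-addressed” means in practice: the address is derived from the bytes themselves, so identical content always yields the same identifier and any change yields a different one. IPFS wraps its hashes in multihash/CID formats rather than using raw SHA-256, so the sketch below is only an analogy:

```shell
# Content addressing in miniature: hash the bytes, use the hash as the name.
# (IPFS uses multihash/CID encodings, not raw sha256; this is an analogy only.)
printf 'hello dweb' > page.txt
sha256sum page.txt | awk '{print $1}'

# The same content always produces the same address...
printf 'hello dweb' > copy.txt
sha256sum copy.txt | awk '{print $1}'

# ...while any change to the content produces a completely different one.
printf 'hello dweb!' > changed.txt
sha256sum changed.txt | awk '{print $1}'
```

This is what makes the dweb’s preservation goal plausible: a content address doubles as a verification of the content itself.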

Here’s how I did it

My laptop screen while installing and initialising IPFS

Following Stephen’s videos (linked above) was very straightforward. They are very clear and thorough. The only real hurdle I had to overcome was that I was performing the operations on my Mac laptop rather than on a Windows device. All in all, the process was much the same.

Downloading and extracting IPFS

The first task I set about was downloading and installing the go-ipfs distribution implementation of the Inter-Planetary File System (IPFS).

I unzipped/extracted the go-ipfs download into my home directory (davidmoloney$ in my case). It isn’t as easy as double-clicking on a .exe file in the directory and following an installation wizard, I’m afraid. You are the installation wizard!


I opened up the terminal application on my Mac to begin the process of installation. First things first, I changed directory into the new go-ipfs one which, if you remember, I had saved to my home directory.

cd go-ipfs

In Stephen’s video, he lists the subdirectories within the go-ipfs directory on his Windows device using Microsoft PowerShell. On the Mac terminal the equivalent command is

ls

Running the ls command in Mac terminal within the go-ipfs directory

You could also type

ls -l

if you were looking for some further detail.

Running the ls -l command in Mac terminal within the go-ipfs directory


It’s the “ipfs” file listed on the right-hand side of the “ls” screenshot above that I am initially interested in. I want to initialise this file in order to establish my IPFS node. To do this I type

../go-ipfs/ipfs init

Initialising the ipfs file generates a hashed peer identity for my IPFS node on the distributed web. It also creates a link for me to progress to the next step and open the Readme file.

I copied the hashed value that had been generated in the terminal (by highlighting it and using Command+C) in order to open the Readme file, ensuring also to copy the “cat” code that comes immediately before the hashed string. Stephen notes the importance of this in his video: without the “cat” prefix the command will not run properly.

I typed and pasted the following command into the terminal to open the Readme file. You will replace the *ABCD* section in the code below with the hashed public key that is generated for you (although I think it’ll probably be the same anyway – ending in “Vv”).

../go-ipfs/ipfs cat /ipfs/*ABCD*/readme

The Readme.md file should open for you within the terminal.

Initialising IPFS via the Mac terminal and opening the Readme file

Starting the daemon service

Once initialised, it is important to get the IPFS service up and running by starting the daemon:

../go-ipfs/ipfs daemon

I love how Stephen pronounces daemon in his video – demon! Very Irish!

The IPFS Companion Add-on for Firefox

Ordinarily, it seems that by browsing to the web address produced after running the daemon service, I should be able to access IPFS via my browser. However, without the IPFS Companion add-on, I couldn’t get to that point using Firefox.

Once the IPFS Companion add-on is installed you can click the small icon in the browser bar and then select the Open Web UI option to view a dashboard interface.

IPFS Companion add-on with option to “Open Web UI”
IPFS companion showing a connection to 496 peers
Dashboard UI of my IPFS node and distribution of peers chart
The geographical distribution of peers

Creating and hosting my simple website on IPFS

I followed Stephen’s guidance in his video and used Gio d’Amelio’s quick tutorial to get my simple website set up and hosted on IPFS.

Using the Sublime Text 3 text editor for code, I copied and pasted the sample text provided in the quick tutorial and created basic index.html and style.css files. I saved the files in a subdirectory that I similarly named “ipfssite”, following Stephen’s lead.
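For anyone following along without a text editor to hand, the two files can also be created straight from the terminal. The markup below is only a minimal placeholder page, not the exact sample from the tutorial:

```shell
# Create the site directory and two minimal files for it.
# This is placeholder content, not the tutorial's exact sample markup.
mkdir -p ipfssite

cat <<'EOF' > ipfssite/index.html
<!DOCTYPE html>
<html>
<head>
  <link rel="stylesheet" href="style.css">
  <title>My IPFS site</title>
</head>
<body>
  <h1>Hello from the distributed web</h1>
</body>
</html>
EOF

cat <<'EOF' > ipfssite/style.css
body { font-family: sans-serif; text-align: center; }
EOF

# Confirm both files are in place before adding them to IPFS.
ls ipfssite
```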

Adding my site to IPFS using the terminal

I opened a new terminal window and changed directory into the go-ipfs directory within my home directory again

cd go-ipfs

From here, I ran the following command to add the .html and .css files within my ipfssite subdirectory to my IPFS node.

./ipfs add -r ipfssite

Doing this produces hashed values for all of the files and also the entire site. The very last hash value before the command completes is the hash value to use to browse to your IPFS site.
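Since running the real command needs a working IPFS install, here is a simulated sketch of picking that final hash out of the output with standard tools. The Qm… values are made up, and http://localhost:8080 is go-ipfs’s usual local gateway address:

```shell
# Simulated output of `./ipfs add -r ipfssite` (the Qm... hashes are invented).
ADD_OUTPUT='added QmAaaa ipfssite/index.html
added QmBbbb ipfssite/style.css
added QmDddd ipfssite'

# The last line holds the directory hash; its second field is the one to browse with.
SITE_HASH=$(printf '%s\n' "$ADD_OUTPUT" | tail -n 1 | awk '{print $2}')

# go-ipfs usually serves a local gateway on port 8080 while the daemon runs.
echo "http://localhost:8080/ipfs/$SITE_HASH/"
```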



And tada!

Finally, I decided to install the Beaker browser and establish a new .dat website as well. This post is already a little unwieldy, so I won’t detail how that went, but it was all pretty straightforward. Ultimately, I simply followed along with Stephen’s video instruction, which was excellent.

The first version of my .dat site on Beaker browser

Featured image, Universe, by geralt


#EL30 Week 4 – Identity

The world changes. Some people don’t.
You learned things that were true back then, but now they’re false.
You got successful doing things one way, but now that way is moot.
You still consider yourself an expert, but that expertise has expired.
You dug so deep into something that you lost perspective, and didn’t realize the landscape had changed.
Sometimes it’s just a change in situation. The strategy that got you to where you are is different from the strategy that will get you to where you want to be next.


In #EL30 this week the focus was Identity.

This post contains more questions than answers, more randomly assorted out-loud thoughts than anything else. I’m prepared to be ‘not quite there’ in my interpretations of much of this. It’s all a work in progress, ironically.

Identity is a deep and complex topic, and one that could be discussed in a variety of different ways. It is both a personal (internal) and a social (external) construct. It isn’t solely what we think of or communicate about ourselves, our self-image, but also what others think of and communicate about us. Considering identity from a psychological perspective, through the work of Carl Rogers, can incorporate both aspirational and fantasy elements. We see this more often nowadays with people on social media portraying a projected sense of self, or a more ideal version of themselves, through their publicly broadcast social media, and other people providing their impressions of that through liking, sharing, following, friending, etc.

For me, identity is more a perpetual interplay of elements within different contexts than a finished product; it is also more plural than singular. Our self-concepts about our identity are likely to change as the world around us changes and our role changes within it. Identity is never complete; it is ever in process. Who we are and what we do is multi-faceted, changeable and imperfect. And my understanding of it is much the same.

The essence of identity might refer to the type of person we are recognised as being, both internally and externally, at a certain point in time. The term being is inclusive of the type of person we were in the past and the one we might become in the future also. 

Over on her blog, Jenny Mackness wrote the following piece, quoting renowned social learning theorist Etienne Wenger, which really resonated with me.

It is not just what we say about ourselves or what others say about us. It is not about self-image, but rather a way of being in the world – the way we live day by day. He [Etienne] expands on this on p.151 of his book, writing:

An identity, then, is a layering of events of participation and reification by which our experience and its social interpretation inform each other. As we encounter our effects on the world and develop our relations with others, these layers build upon each other to produce our identity as a very complex interweaving of participative experience and reificative projections. Bringing the two together through the negotiation of meaning, we construct who we are. In the same way that meaning exists in its negotiation, identity exists – not as an object in and of itself – but in the constant work of negotiating the self. It is in this cascading interplay of participation and reification that our experience of life becomes one of identity, and indeed of human existence and consciousness. (p.151)

Blog Source and Book Source

Conversation with Maha Bali

#EL30 Week 4 Identity – conversation between Stephen Downes and Maha Bali

In this week’s conversation, Stephen explored the topic further with Maha Bali. I was already aware of some of Maha’s work through the work of Dr Catherine Cronin. Stephen and Maha spoke about the composition of identity, whether elements are internal or external, how our activities and our identity relate, and about a number of Maha’s activities, including Virtually Connecting and the ongoing Equity Unbound course.

Maha described identity, in a blog post she wrote prior to the conversation, as evolving, dynamic and contextual. In it, Maha spoke about recognition of who and what we are as a fluid concept, dependent upon a range of factors – our perception of self, others’ perceptions, comparative perspectives, the particular time in our life, etc. Personal identity is something that is constantly negotiated. As Maha said in that blog post, her Virtually Connecting co-creation felt like an extension of herself: what she helped to create felt like a part of who she was. The conversation finished on a very interesting note, with both agreeing that identity is qualitatively different from the sum of its parts.

Another key takeaway from the conversation was the discussion about choice. We choose to actively take up an identity or to identify with something, like being ‘resilient’, and choose not to identify with other things, like being ‘a quitter’. Each of us is selective in knowing what we are, and knowing what we are not.

“Identity requires some element of choice.”

“Identity is marked by similarity, that is of the people like us, and by difference, of those who are not.”


Digital Identity

Identity and digital identity are not one and the same. Someone without access to the internet still has an identity. In a presentation I’ve given previously entitled ‘Who Am I Online?’, I portrayed digital identity in particular using the concept of an identity box. Inside the box is what you think of yourself, your perceptions of all that you identify with: the personal. The outside of the box represents external thoughts about your identity, what you are socially seen to identify with, or the parts of your identity that you may not have as much control over shaping, such as the digital footprint created from the traces of data you leave behind online, shaped by ‘forces beyond our control’.

Identity Box idea. Vinyl Cube by Carson Ting on Flickr.

“If identity provides us with the means of answering the question ‘who am I?’ it might appear to be about personality; the sort of person I am. That is only part of the story. Identity is different from personality in important respects. … an identity suggests some active engagement on our part. We choose to identify with a particular identity or group. … [the] importance of structures, the forces beyond our control which shape our identities, and agency, the degree of control which we ourselves can exert over who we are.”


I was attempting to get people to comprehend identity as something over which we have control of certain elements, but over which our agency, in terms of complete control, is limited.

As part of the session I delivered, I used the Lightbeam plugin for Firefox, linked to below. I explained to the audience that I was starting the plugin at the beginning of the session and that over the course of my 40-minute presentation my browsing behaviour would be captured by it. At the end of the session I displayed the graph visualisation: the kind of identity profile that had been built about me behind the scenes while I was presenting. It listed the sites I browsed to during the session, along with the trackers that had followed me from site to site across the web as I browsed, generating an identity profile of me.
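The profile-building that Lightbeam makes visible can be sketched in a few lines; all site and tracker names below are hypothetical, purely for illustration:

```python
# Minimal sketch of the kind of cross-site profile a third-party tracker
# can assemble. Site and tracker names are hypothetical examples.
from collections import defaultdict

# (site visited, third-party tracker observed on that site)
observations = [
    ("news.example", "adtracker.example"),
    ("shop.example", "adtracker.example"),
    ("travel.example", "adtracker.example"),
    ("shop.example", "analytics.example"),
]

profile = defaultdict(set)
for site, tracker in observations:
    profile[tracker].add(site)

# A tracker present on many sites can link those visits into one browsing profile.
for tracker, sites in sorted(profile.items()):
    print(tracker, "saw you on", sorted(sites))
```

The point of the sketch is that no single site needs to know your full history: the linkage happens at the tracker, which appears across many sites.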

Sample screenshot of Mozilla’s Lightbeam from Wikipedia

Further reading and resources

Identity, Keys and Authentication

To view an insightful perspective into the future of identity and online authentication, this video from Stephen Downes explains the concepts of public and private key cryptography and introduces Yubi keys.

At this link, Bonnie Stewart speaks about Digital Identities: Six Key Selves of Networked Publics.

Here are some further resources to inform yourself about the digital traces we all leave behind in online environments and how to begin to counteract them:

Featured image by Ben Sweet on Unsplash


#EL30 Week 3 – Graph

For #EL30 this week, the topic of Graph is explored. This blog post will not address our task for this week but will instead capture some of what I’ve been considering about the topic and some excellent resources I’ve found that have helped to shape my thoughts.

The graph (think network, community, ecosystem of connections) is seen as an important conceptual framework for the movement from web2 to web3. In essence, I conceptualise this movement as the gradual removal of today's platform middlemen, with their particular business models, in favour of more direct communication and interaction with one another over the internet.

Graph constructs aid with depicting distributed networked systems. Network science helps us to understand the ways in which these systems operate.

Conversation with Ben Werdmüller

The concept of graph was asserted by Stephen at the outset of the week:

The graph is the conceptual basis for web3 networks. This concept will be familiar to those who have studied connectivism, as the idea of connectivism is that knowledge consists of the relations between nodes in a network – in other words, that knowledge is a graph (and not, say, a sequence of facts and instructions).

Graphs, and especially dynamic graphs, have special properties, the results of which can be found in social network theory, in modern artificial intelligence, and in economic and political theory.


Ben and Stephen delved deeper into this concept during their online conversation and touched upon a possible future that can be derived from the movement to web3.

Stephen described the common traits of a number of network structures and systems.

Some common networks:
– A social network (or social graph) which is made up of people (and sometimes bots pretending to be people) connected by relations of ‘friending’ or ‘following’ and interacting by means of ‘texting’ or ‘messaging’.
– A neural network, which is made up of neurons (or, in computers, artificial neurons), connected by means of axons or connections, interacting by means of ‘pings’ or ‘signals’
– A financial network, which is made up of accounts, which have ‘balances’ of various sizes, and which are connected through contracts and interact through transactions
– A semantical network (such as the Semantic Web), which is a collection of resources connected through an ontology and which interact through logical relations with each other.

In all of these, the core idea is the same. We have a set of entities (sometimes called nodes or vertices) that are in some way connected to each other (by means of links or edges or transactions or whatever you want to call the linkage and the interaction through the linkage).


Graphs help us to recognise the relationships between actors and how they interact with each other within an environment. It isn't necessarily the individual objects we should focus on when conceptualising graphs; it's the things that anchor the objects together, the connections, that prove interesting.

A graph can't tell us everything about a networked system, but it can be a very effective visual framework for recognising the constituent components of a network and how they operate. It also allows us to begin to recognise elements that cannot be seen with our eyes: the invisible, underlying, subtle and nuanced contacts, properties, connections and interactions, the currents that flow within the graph structure and shape it, its electromagnetic force, for want of a better term. The graph itself depicts the physical format of a network, but it is what the graph allows us to perceive in greater depth, both visibly and invisibly, that makes it important.

In connectivism we have explored the idea of thinking of knowledge as a graph, and of learning as the growth and manipulation of a graph. It helps learners understand that each idea connects to another, and it’s not the individual idea that’s important, but rather how the entire graph grows and develops.


An example of a directional graph might be one that looks at the reporting relationships in a hierarchically structured institution or organisation. At the top of the hierarchical structure might sit the CEO or President, beneath that the layer of Vice Presidents, Chief Operational Officers, Chief Finance Officers etc., beneath that layer might sit the directors of divisions or departments, beneath that the managers and coordinators layer and then the employees working within teams. The reporting relationships would commonly flow upward in one direction from the base of the hierarchy, layer by layer, towards the top.
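A reporting hierarchy like this can be sketched as a directed graph in which every edge points one way, from a role to the role it reports to (the titles here are hypothetical):

```python
# Directed graph of reporting relationships: each edge points one way,
# from a role to the role it reports to. Titles are hypothetical examples.
reports_to = {
    "Employee": "Manager",
    "Manager": "Director",
    "Director": "VP",
    "VP": "CEO",
}

def chain_of_command(role):
    """Follow the directed edges upward until the top of the hierarchy."""
    chain = [role]
    while chain[-1] in reports_to:
        chain.append(reports_to[chain[-1]])
    return chain

print(chain_of_command("Employee"))  # ['Employee', 'Manager', 'Director', 'VP', 'CEO']
```

Because the edges are one-directional, you can walk up the chain but the structure says nothing about walking down; a bidirectional graph would store each relation in both directions.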

A real-life example of a static bidirectional graph might be a return-journey flight path. For a flight where a stopover is required before arriving at the desired destination, you would depart from your origin airport, travel via your stopover waypoint, and arrive at your destination – let's say Dublin (origin) to Copenhagen (destination) via Oslo (waypoint). The return journey demonstrates this in reverse.

Examples of dynamic graphs with either directional or bidirectional relations between vertices include the internet and social networks; a less obvious example might be a historical timeline (directional, as time advances).

Dynamic, living graphs appear across many fields (science, anthropology, psychology, etc.) and are particularly common within computer science, machine learning, and artificial intelligence. In computer science we encounter graphs as Merkle trees and Directed Acyclic Graphs (DAGs), structures that underpin version control with Git.
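The Merkle idea can be sketched with the standard library: each node's identifier is a hash of its own content plus its parents' identifiers, so tampering with any ancestor changes every identifier downstream. This is an illustrative sketch of the principle, not Git's actual object format:

```python
import hashlib

def node_id(content, parents=()):
    """Content-address a node by hashing its data plus its parents' ids,
    as in a Merkle DAG. Git commits work on this general principle."""
    h = hashlib.sha1()
    h.update(content.encode())
    for parent in parents:
        h.update(parent.encode())
    return h.hexdigest()

a = node_id("first commit")
b = node_id("second commit", parents=(a,))
c = node_id("second commit", parents=(node_id("tampered first commit"),))

# Tampering with an ancestor changes every descendant's identifier.
print(b != c)  # True
```

This is why a Merkle DAG makes history tamper-evident: you only need to compare the latest identifiers to detect any change anywhere in the chain.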

Some very helpful resources

Coming off the back of last week's conversation with Tony Hirst, where we touched upon Jupyter Notebooks as a way of learning and practising actively and dynamically, I thought I'd explore the web for an interactive, dynamic simulation of graph theory in order to learn a little more about it. I came across a very useful graph theory resource created by Avinash Pandey on GitHub that models 3D graph structures.

In my online search to better understand this topic I came across another great resource, by Nicky Case, that helped me to think in terms of graphs and the science of networked systems. The resource is an interactive online game called Crowds. In it, Case attempts to deconstruct theories of network science as a plausible explanation for the phenomena we know as the Madness of Crowds and the Wisdom of Crowds.

There were a number of key takeaways from this resource for me. Concepts about complex connections were introduced wherein the greater exposure someone has to an idea amongst their social networks, the greater the chance that they will be influenced by it. Threshold factors might also influence whether an idea spreads beyond certain nodes in a network.
The resource further looks at the idea of contagion, explores consensus and concludes by examining the importance of such concepts as bonding within networks, bridging between networks, and the immutable influence of Small World Networks.
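The threshold idea from Crowds can be sketched as a tiny simulation: a node adopts an idea only once the fraction of its neighbours who already hold it crosses a threshold. The graph and threshold below are arbitrary examples:

```python
# "Complex contagion" sketch: a node adopts the idea only when at least
# `threshold` of its neighbours have adopted it. Hypothetical small graph.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b", "d"},
    "d": {"b", "c", "e"},
    "e": {"d"},
}

def spread(seed_adopters, threshold=0.5):
    """Repeatedly let nodes adopt until no further change occurs."""
    adopters = set(seed_adopters)
    changed = True
    while changed:
        changed = False
        for node, neighbours in graph.items():
            if node not in adopters:
                exposed = len(neighbours & adopters) / len(neighbours)
                if exposed >= threshold:
                    adopters.add(node)
                    changed = True
    return adopters

print(sorted(spread({"a"})))       # ['a'] (too little exposure; the idea stalls)
print(sorted(spread({"a", "b"})))  # ['a', 'b', 'c', 'd', 'e'] (a full cascade)
```

The same mechanism shows why bonding and bridging matter: a single extra seed node can be the difference between an idea stalling and cascading through the whole network.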

Screenshot from Crowds – Bonding, Bridging, and Small World Networks

One of the sources cited in Case's resource is a book co-authored by Nicholas Christakis and James Fowler called Connected, which analyses the importance and value of connection, of network, of community, and the influence each of us can have on it and it can have on each of us. Here are links to the book and a thought-provoking TED talk on YouTube (18 mins), which gives an insight into the impact of our social networks. From their research they coined the 'three degrees of influence' theory, essentially arguing that, even though the influence dissipates, our actions and behaviours may have consequences up to three degrees of separation away from us (our friends' friends' friends). This theory is contentious.

In the TED talk, Christakis discusses two common objects, a pencil and a diamond, both made from carbon. The carbon atoms in each are connected and arranged in different ways: the lead in a pencil is soft, breakable and dark, while diamond is hard and clear. Both are built from the same underlying atoms, but those atoms are connected in different ways, and this affects what each becomes.

Pencil lead and diamond – both made from carbon atoms, connected in different ways

According to Christakis, those properties (soft, dark, hard, clear)

“do not reside in the carbon atoms; they reside in the interconnections between the carbon atoms, or at least arise because of the interconnections between the carbon atoms.”

Source – timestamp 15.09
Structure of graphite and diamond – Connections matter

“So, similarly, the pattern of connections among people confers upon the groups of people different properties. It is the ties between people that makes the whole greater than the sum of its parts. And so it is not just what’s happening to these people — whether they’re losing weight or gaining weight, or becoming rich or becoming poor, or becoming happy or not becoming happy — that affects us; it’s also the actual architecture of the ties around us. Our experience of the world depends on the actual structure of the networks in which we’re residing and on all the kinds of things that ripple and flow through the network.”


Connections matter.


#EL30 Week 2 – Cloud

In the second week of #EL30 we explored the topic of Cloud. Stephen begins by introducing the idea:

The joke is that “the cloud” is just shorthand for “someone else’s computer.” The conceptual challenge is that it doesn’t matter whose computer it is, that it could change any time, and that we should begin to think of “computing” and “storage” as commodities, more like “water” or “electricity”, rather than as features of a type of device that sits on your desktop.


On initial reading I found this a difficult concept to wrap my head around. How could I consider cloud computing to be more like water? I found it a little difficult to interpret it that way. However, the more I learned about the topic from the resources, the weekly activity, fellow participants' blog posts, and my own research, the more Stephen's words carried weight and meaning for me.

One of the defining features of cloud computing is that it can be an on-demand self-service: "the cloud is a form of utility computing". And like a utility service (take, for example, electricity in our homes), we can choose a provider, sign up and create an account with them, and use the service whenever we need it, as much or as little as we want. We're then billed by the provider for the extent of our usage. The utility or commodity comparison makes clear sense in this regard. One distinct advantage of the cloud is that by placing our data and the services we use there, they become accessible to us from virtually anywhere: not just at home but on the train, in the office, and on any device that can access the web.

#EL30 Week 2 conversation with Tony Hirst

In the #EL30 guest conversation this week, Tony Hirst, Senior Lecturer in Telematics at the Open University, UK, spun up a virtual server using Digital Ocean, installed a pre-built Docker container onto it, and ran the Jupyter Notebook application. The Jupyter web app facilitates the creation of a shared living document: manipulations of the programming code (the input) can be run and the outcomes (the output) viewed instantly. Stephen points us towards the idea that

These new resources allow us to redefine what we mean by concepts such as ‘textbooks’ and even ‘learning objects’.


It took Tony a matter of seconds to do this, at a cost to him of approximately £0.03 per hour of usage. Tony, in England, sent Stephen, in Canada, the IP address that Digital Ocean auto-generated for the web application, in the form of a URL for accessing the container he had spun up. Halfway around the world, Stephen browsed to this URL and was able to access the application alongside Tony. Essentially, the cloud made it possible for them to share a computing service over the internet.
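A setup along the lines Tony demonstrated can be reproduced on any Docker-capable host; as a rough sketch (using one of the community Jupyter Docker stack images, with port and token details varying by setup):

```shell
# Run a ready-made Jupyter Notebook container (one of the community
# "jupyter" Docker stack images) and publish it on port 8888.
docker run --rm -p 8888:8888 jupyter/base-notebook

# The container prints a URL containing an access token; browsing to
# http://<server-ip>:8888/?token=... from anywhere on the web opens
# the same shared notebook environment.
```

Anyone given that URL and token reaches the identical computing environment, which is the whole point of the demonstration.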

In terms of its application to eLearning, cloud offers some powerful opportunities and benefits: virtual sandboxes could be easily created to test out proprietary software on (or any software) at much reduced costs to both educators and students in a variety of disciplinary contexts; through removal of the barriers of device, OS, particular device configuration etc., anyone with a web-enabled device and internet access could begin working independently, with peers, teachers, or colleagues, within the specific online learning environment of choice.

In particular, anyone who has used or is familiar with Linux distributions will know about package installation and the need to have dependencies installed in order for the package you want to work properly. Spinning up a virtual server to host a pre-built container, including all the code and dependencies needed to support the application(s) you want to work on with students or colleagues, removes the problematic support requirement of troubleshooting the myriad different devices, OSs, configurations, dependencies and versions that each individual's device can have. The cloud overcomes this: everyone starts off singing from the same hymn sheet, the same shared online computing environment.

Docker – Containerized Applications

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.


An aspiration Tony felt would be worthwhile for institutions to work towards is developing institutional clouds: more institutions offering Docker machines (or similar) that staff and students could log into via URL from their personal computers. These machines, running on institutional servers, could host a multiplicity of Docker containers or container clusters catering for an extensive array of disciplinary and trans-disciplinary subject matter, all interfaced through a web browser. Ideally each would be mounted to the individual's own file store so that items could be launched from and saved back to it. Currently, using the Digital Ocean virtual server setup described above, it's not possible to save work within the Jupyter Notebook running inside the Docker container. Work done during a session is destroyed upon exiting the application unless it is downloaded in the specific file format (.ipynb), which can be re-uploaded to continue working on it the next time the containerised application is used.
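That download-and-re-upload workflow works because an .ipynb file is just JSON. A minimal sketch with the standard library (the filename and cell content are illustrative only):

```python
import json

# An .ipynb file is plain JSON: a list of cells plus notebook metadata.
# The filename and cell content here are illustrative only.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [{
        "cell_type": "code",
        "execution_count": None,
        "metadata": {},
        "outputs": [],
        "source": ["print('hello #EL30')\n"],
    }],
}

with open("scratch.ipynb", "w") as f:
    json.dump(notebook, f)

# Re-uploading a file like this to any Jupyter server restores the saved work.
with open("scratch.ipynb") as f:
    print(json.load(f)["nbformat"])  # 4
```

Because the format is plain JSON, the notebook file is portable between servers, which is what makes the download-then-re-upload routine work at all.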

Here's a snapshot of Tony's 'showntell' GitHub repository. Notice that he has branches with different Jupyter Notebooks for astronomy, chemistry, computing, electronics, engineering, linguistics, etc. Same application, multiple trans-disciplinary uses and users.

In a context where I am reading more and more about ownership of our own data and self-hosting as a possible means of reclaiming digital identity, Keith Harmon set an interesting and intriguing scene over on his blog:

As a complex system myself, I self-organize and endure only to the degree that I can sustain the flows of energy (think food) and information (think EL 3.0) through me. The cloud is primarily about flows of information, and the assumption I hear in Stephen’s discussion is that I, an individual, should be able to control that flow of information rather than some other person or group (say, Facebook) and that I should be able to control the flow of information both into and out of me. I find this idea of self-control, or self-organization, problematic—mostly because it is not absolute. As far as I know, only black holes absolutely command their own spaces, taking in whatever energy and information they like and giving out nothing (well, almost nothing—seems that even black holes may not be absolute).


Keith provides a deeper insight into his perspective and extends an invitation to his readers to journey with him outside, where he likes to think about discussions such as this.

It helps me to walk outside for discussions such as this, so come with me into my backyard for a moment. The day is cool and sunny, so I’m soaking in lots of energy from sunlight. I’ve had a great breakfast, so more energy. I’ve read all the posts about the cloud in the #el30 feed, so I have lots of information. Of course, I’m pulling in petabytes of data from my backyard, though I’m conscious of only a small bit. Even with the bright light, I can see only a sliver of the available bandwidth. I hear only a little of what is here, and I certainly don’t hear the cosmic background radiation, the echo of the big bang that is still resonating throughout the universe. I’m awash in energy and information. I always have been. Furthermore, I can absorb and process only a bit (pun intended) of the data and energy streams flowing around me, and very little of this absorption is my choice. Yes, if the Sun is too bright, I can go back inside, put on more clothing, or put on sunscreen, but really, what have I to do about the flow of energy from the Sun?


I'm going to take some time to mull over the thought of being truly able to control the flow of information both into and out of me. It's an intriguing question.

If anyone is seeking to implement Jupyter Notebooks into their practice, Tony has authored a resource entitled “Getting Started With Jupyter Notebooks for Teaching and Learning”, which might be useful.

If you'd like to test-drive Jupyter for yourself, there are a couple of options:

If your HEI provides Microsoft O365, your credentials should be able to log you into notebooks.azure.com to run notebooks from.

Google also offer a notebook environment at colab.research.google.com which includes integrated storage within Drive.


#EL30 Week 1 – Data


Week 1 of #EL30 addressed the topic of Data. Within that, two core conceptual challenges relating to eLearning were explored, “first, the shift in our understanding of content from documents to data; and second, the shift in our understanding of data from centralized to decentralized.”

All of this exists within the backdrop of “what is now being called web3, the central role played by platforms is diminished in favour of direct interactions between peers, that is, a distributed web”. The topic of data is relatively new to me and I am figuring much of it out as I go.

Our data exists online across multiple distributed nodes, and each of us embodies the unique identifier that links all of this data together. In Stephen's week 1 data summary article he highlights how digital data is beginning to permeate many aspects of our lives: "We are beginning to see how we generate geographic data as we travel, economic data as we shop, and political data as we browse videos on YouTube and Tumblr. A piece of media isn't just a piece of media any more: it's what we did with it, who we shared it with, and what we created by accessing it." The traces of data we leave behind of where we've been online create a depiction of us for those who can see it: an online identity, built from breadcrumbs in the digital woods.

Activity – Conversation with Shelly Blake-Plock

Week 1 conversation with Shelly Blake-Plock, Co-Founder, President and CEO of Yet Analytics

The week 1 conversation with Shelly covered a range of interesting topics. Discussion ebbed and flowed and touched upon concepts such as

  • using data in actionable ways to understand learners, to improve instruction and content and to manage data systems that support learning,
  • the Experience API (xAPI) specification,
  • the xAPI enterprise learning ecosystem,
  • Learning Record Store (LRS),
  • data ownership and management,
  • identity management applications,
  • the privacy trade-off of these systems.

There was good discussion around the Experience API, commonly abbreviated to xAPI, a modern specification for learning technology that helps to turn learning activities, experiences and performance into data. Shelly was the Managing Editor of the IEEE Learning Technology Standards Committee Technical Advisory Group on xAPI (TAGxAPI), which created a technical implementation guide for xAPI.

Essentially, xAPI was created as a way of tracking learning experiences and performance that extends beyond the bounds of our traditional Learning Management Systems (LMSs) and the content and activities that learners launch from within them. It allows an individual's learning to be recorded and moved more freely out of silos such as the LMS, as long as it is in xAPI format or can be converted to it. The notion is that learning occurs everywhere; it's not simply confined to the LMS or to the classroom, and it's now possible for the data generated from learners' experience and performance (online and offline) to be tracked and sent as xAPI statements (signals) from a range of origins such as mobile apps, simulations and games, and the physical world through wearable technology, sensors and online games.
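An xAPI statement is itself a small JSON document built around an actor, a verb, and an object; here is a minimal illustrative sketch (the learner, mailbox and activity URL are hypothetical):

```python
import json

# Minimal sketch of an xAPI statement: actor, verb, object.
# The learner, mailbox and activity URL are hypothetical examples.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.com",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/el30/week-1",
        "definition": {"name": {"en-US": "#EL30 Week 1 - Data"}},
    },
}

# Statements like this are sent to a Learning Record Store (LRS),
# which collects them for later analysis.
print(json.dumps(statement, indent=2))
```

Because every statement follows this actor-verb-object shape regardless of where the activity happened, records from apps, simulations and sensors can all land in the same LRS and be analysed together.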

With this data it becomes possible to analyse and understand how learners are learning and potentially improve the content and activities that they receive. xAPI statements about learning experiences can then be hooked up via a number of launch mechanisms to a Learning Record Store (LRS) to collect reams of data about how the learner interacts with their learning environments. Analysis of this data can be automated through machine learning algorithms depending on what type of information is being sought.

How xAPI works with the LRS


Most of us have likely become familiar with the term ‘surveillance capitalism’ as the purported business model employed by many web2 corporations and platforms. Online data generated by each of us (our digital footprint) is already bought and sold to online advertising and marketing agencies. We unwittingly and nonchalantly give our ‘consent’ to it by clicking agree to the terms and conditions of the seemingly ‘free’ online platforms and services we sign up to.

The ‘business model’ is explained early in this presentation by Laura Kalbag of Ind.ie:

Laura Kalbag speaks about indie design at WordCamp, London

When viewing all of this through a critical lens, talk about tracking and gathering learner data for analysis immediately brings with it the need to talk of a range of considerations around ownership, ethical use, privacy, security, and data governance. I’ve noted similar sentiments from many of my fellow #EL30 participants.

The use of learning analytics to support the student experience could afford valuable insights, but there are ethical implications associated with collection, analysis and reporting of data about learners.

Rebecca Ferguson (2012) defines learning analytics as "the measurement, collection, analysis and reporting of data about learners and their contexts, for purposes of understanding and optimising learning and the environments in which it occurs."

JISC UK's Code of practice for learning analytics, authored by Niall Sclater and Paul Bailey, provides very helpful guidance in this regard under eight key headings, identified to help institutions (and possibly other organisations) understand and carry out responsible, appropriate and effective analysis of the data that they gather:

  1. Responsibility
  2. Transparency and consent
  3. Privacy
  4. Validity
  5. Access
  6. Enabling positive interventions
  7. Minimising adverse impacts
  8. Stewardship of data

Niall Sclater also compiled a literature review of the ethical and legal issues for this code of practice, collating critical ethical questions from a diverse body of literature across many of the areas identified in the code. Here's a snapshot of some of the thought-provoking questions posed in that review:

Ethical question (Code of Practice area):

  1. Does the administration let the students/staff know their academic behaviours are being tracked? (Hoel et al., 2014) – Responsibility
  2. Does an individual need to provide formal consent before data can be collected and/or analysed? (Campbell et al., 2010) – Transparency and consent
  3. How transparent are the algorithms that transform the data into analytics? (Reilly, 2013) – Validity
  4. Who can mine our data for other purposes? (Slade & Galpin, 2012) – Stewardship of data
  5. Who is responsible when a predictive analytic is incorrect? (Willis, Campbell & Pistilli, 2013) – Privacy
  6. Does [a student profile] bias people's expectations and behaviour? (Campbell et al., 2010) – Minimising adverse impacts

On the #EL30 course I've read a bit about the IndieWeb, a community based on the principles of owning your own domain and owning your own data. The IndieWeb aims to make it easy for everyone to take ownership of their online identity, and believes that people should own what they create (https://opencollective.com/indieweb#about). I definitely want to explore this further in light of the next generation of learning technologies.


#EL30 Introductory post

I'm probably a little late to the eLearning 3.0 MOOC (#EL30) party; nonetheless, I'm hoping to avail of this opportunity to learn from Stephen Downes' MOOC and from the network of experienced people created by the connectivist learning approach that he employs (more info here and here). A diverse, energetic, knowledgeable network is already clearly emerging in the course feeds area, and I hope to contribute to this community where I can. At least this blog post should allow me to submit my feed for RSS harvesting!

An introductory article by Stephen entitled ‘Approaching E-Learning 3.0’ had me immediately hooked:

“If you’re reading this, then this course is for you. You’ve demonstrated the main criterion: some degree of interest in the subject matter of the course.”

I’m certainly interested, so let’s give it a go!

The focus of #EL30 will be to explore key domains that Stephen envisages within the next generation of distributed learning technology. The main topics being explored are laid out in this image.

#EL30 Topics

In the presentation Stephen gave to launch #EL30, he rounds out the detail of each of these topics and considers the impact of the next wave of emerging and distributed learning technologies:

#EL30 launch presentation by Stephen Downes

By way of a quick introduction, I work as Technology Enhanced Learning Manager in Graduate & Professional Studies at the University of Limerick, Ireland. I’m involved in the design and production of flexible online and blended programmes and research of same, and on shaping related institutional structures and processes.

I’m interested in open and online learning, educational technology, instructional (learning experience) design, technology in general, and all associated literacies. I’ve been thinking about establishing my own web presence for some time and participating in this MOOC has given me the impetus to go and do it.

Already an abundance of topics has piqued my interest: linked data, web (re-)decentralisation (see SoLID, created by Sir Tim Berners-Lee), the IndieWeb, gRSShopper, Webmentions, the Fediverse, RSS aggregation and harvesting, and many more.

I look forward to exploring and understanding them in greater depth.