Data Science, Data Analytics and Machine Learning Consulting in Koblenz Germany – https://www.rene-pickhardt.de

Virtual Payment Channels for the Lightning Network – A Lightning Network Improvement Proposal draft
https://www.rene-pickhardt.de/virtual-payment-channels-for-the-lightning-network-a-lightning-network-improvement-proposal-draft/
Tue, 17 Jul 2018 11:07:50 +0000

I suggest adding another feature to the lightning network which I call a virtual payment channel (or VPC for short), described in this article. Such a virtual payment channel will NOT be backed by the blockchain and thus cannot operate in a trustless manner. However, I see several use cases for virtual payment channels. The most important one seems to be that two parties who agree to create a virtual payment channel can instantly create a channel of essentially arbitrary capacity. This will help to increase the max-flow properties of the network and make routing much easier. A virtual payment channel basically allows both channel partners to access the funds of the other channel partner, which yields some flexibility and obviously a high risk in case of fraudulent behavior. While writing down the article I realized that there are also some dangers to virtual payment channels. The main one is that to some degree VPCs resemble the fractional reserve banking system, even though they are still only credit based on positive money (I think the English language should borrow the German word "Vollgeld-System"). At first I considered dropping the article, because the last thing we need is a cryptocurrency crisis similar to the banking crisis we have been through. However, I think that when regulated and used properly, VPCs will add more benefit than danger to the lightning network. Despite these drawbacks, I will still propose this feature.

I came up with the concept of virtual payment channels when I finally started my work on a splicing protocol specification for BOLT 1.1. On paper I have already laid out the data flow for asynchronous, non-blocking splice-in and splice-out operations, which resemble pretty closely the ones discussed in the GitHub issue; I will write about them soon. Before digging deeper into laying out my specification I wanted to increase the usefulness of the splicing operations, so I started to wonder whether splicing could take place without ever hitting the blockchain. While playing around with some ideas I realized (though have not proven) that splicing without hitting the blockchain seems impossible if the trustless property of payment channels is mandatory. Hence I asked myself what would happen if we dropped this central property, and as a result I accidentally came up with the concept of virtual payment channels.

Having talked to larger exchanges and potential hubs of the lightning network in the past, and having seen that they wonder a lot about rebalancing their funds and about how they could provide capacity without uselessly locking their liquidity, I realized that virtual payment channels might be a solution to some of their core problems. Throughout this article and at its end I will provide use case stories from the perspective of a central node in the network as well as from the perspective of a mining pool.

Let us begin by looking at one of the two core elements of the lightning network: the (standard) payment channel. A standard payment channel operates in a trustless manner, which allows two parties to create a payment channel without needing to trust each other. The ability to create trustless payment channels is certainly a very useful property that the lightning network must have. However, this trustless property comes at a rather high price, bringing the following three drawbacks (which would still exist even if we upgraded payment channels to follow the eltoo protocol proposal):

  1. Even when splicing is implemented, channels need to lock the bitcoins that make up the capacity of the channel. Meanwhile those bitcoins cannot be used elsewhere, making the lightning network and its topology somewhat rigid. (In particular if you take into consideration how much the topology could at most change within one bitcoin block.)
  2. Creating (and closing) channels (and splicing funds) are blockchain transactions, which in general are slow and expensive. Slow because we have a block time of roughly ten minutes. Expensive because, independently of the block size discussion, there is a physical limit on how many transactions fit into a block. Remember that block space for a transaction is generally bought by paying mining fees.
  3. Bitcoins within the payment channel sit in a hot wallet and it is difficult to secure them properly. Even if the output of the commitment transaction script is spendable only by a cold wallet address, we have no guarantee that an attacker taking over the lightning node(s) might not negotiate a different output for the next commitment transaction (for example if splicing out of funds is implemented), or, even easier, the attacker could just make a payment to some address, emptying the channel.

Additionally, an indirect problem will be that companies like exchanges, large-scale shops, payment providers or even banks will have many payment channels, since they are likely to be central nodes in the network. The number of channels might exceed the capacity of the hardware running a single node, so people providing core financial infrastructure might need to operate several lightning nodes (let us call them A, B, C, …) to serve their customers. Companies then have to decide, by collecting and evaluating data, which customers to connect to which of their nodes. They also have to connect their own nodes with heavily funded payment channels among each other. Even if done in some smart circle-like topology (with channels such that each node can be reached in log N hops), this is not preferable, since far too many funds are unnecessarily locked just to maintain clearing of funds between one's own nodes. Another drawback is that those additional funds again sit on hot wallets and need to be available in the first place.

Let us assume in the above setting some node N1 wants to make a payment to some node N2. The route for the payment happens to be:

N1 –> A –> B –> N2.

In the classical lightning network the routing could only take place if A could literally send the requested money to node B within a payment channel between A and B. For brevity I omit the exchange of the preimages and the negotiation of HTLCs and new commitment transactions. Those details are defined in the protocol specification so that A and B don't lose money while they route a payment from N1 to N2.

However, if A and B are owned by the same entity (as assumed above), there would be no harm if A received some money and B forwarded it onward without A actually transferring the money to B. In particular, a payment channel with bitcoins locked between A and B would be superfluous. This observation is exactly the idea behind a virtual payment channel.

A high level protocol for operating a virtual payment channel could look like this:

  1. Two nodes would agree on an amount both sides provide to the virtual channel. The money does not necessarily have to be backed somewhere (though having the money available is strongly advisable) and the initial balance also does not have to be symmetric.
  2. The channel balance is maintained by both sides as a balance sheet that needs to be cryptographically signed by both sides for every new update. Even though it will not be possible to enforce the final balance via the blockchain those balance sheets might be enforceable by real courts. (Which technically should never happen as VPCs should only be created if there is complete mutual trust between the channel partners)
  3. The contract between both sides includes the fact that the balance sheet with the highest number of mutually signed updates is the correct one and considered to be used for settlement. (I will repeat myself: Such a contract could only be enforced by a real court and not by the blockchain.)
  4. Channel creation is the first update on the balance sheet and needs to be signed by both parties. In that way a party can decline to open a virtual payment channel by refusing to sign the balance sheet.
  5. The channel is then announced to the network as a virtual payment channel via an extension of the gossip protocol. The extension is necessary because currently the blockchain funding transaction needs to be referenced as a proof of existence of the channel; the transaction is encoded in the short_channel_id field of the announcement_signatures message as well as the channel_announcement message. From the perspective of the network the channel is not any different from other channels. It can be used for routing payments, since every hop is a bilateral contract. The rest of the network doesn't have to care whether that contract is actually enforceable by the blockchain or whether there is some other trust provider between the channel partners. I literally see no risk for the sender and recipient of the funds in routing money through a third-party virtual payment channel.
  6. The channel announcement should be done by both sides and should at least be signed with their own bitcoin keys. This will prevent spamming channels which can’t be used for routing since the partners do not agree that the channels exist. (c.f. point 4)
  7. Routing along a VPC takes place without an HTLC; the channel partners just update the channel balance. (In fact they could even mirror the HTLC behavior on their balance sheet, but I don't see the benefit.)
  8. When closing the channel, one party has to somehow reimburse the other side depending on the final balance in comparison to the starting balance. This problem of settlement is the problem of the two partners and not part of the protocol. From a protocol perspective, either channel partner can just announce the closing of the channel, which takes effect immediately. This is precisely the moment where trust and IOUs come into play.
  9. Closing of the payment channel also has to be announced via the gossip protocol.
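The high-level protocol above can be sketched in a few lines of Python. This is purely illustrative: all class and function names are my own invention, and HMACs over per-party secrets stand in for the real cryptographic signatures of step 2. The point is that every update is numbered and mutually signed, the highest mutually signed update wins (step 3), and the balances are pure IOUs that may even go negative.

```python
import hashlib
import hmac
import json

def sign(secret: bytes, payload: bytes) -> str:
    # Stand-in for a real ECDSA signature over the balance sheet.
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()

class VirtualChannel:
    def __init__(self, balance_a, balance_b, secret_a, secret_b):
        self.secrets = {"A": secret_a, "B": secret_b}
        self.updates = []
        # Update 0 is channel creation; it carries both signatures (step 4).
        self._append({"A": balance_a, "B": balance_b})

    def _append(self, balances):
        state = {"update": len(self.updates), "balances": balances}
        payload = json.dumps(state, sort_keys=True).encode()
        state["sigs"] = {p: sign(s, payload) for p, s in self.secrets.items()}
        self.updates.append(state)

    def pay(self, sender, receiver, amount):
        balances = dict(self.updates[-1]["balances"])
        # Unlike in a real channel the balance may go negative: the "funds"
        # are only an IOU between the two trusting parties.
        balances[sender] -= amount
        balances[receiver] += amount
        self._append(balances)

    def settlement_state(self):
        # Step 3: the balance sheet with the highest mutually signed
        # update number is the one used for settlement.
        return max(self.updates, key=lambda s: s["update"])

ch = VirtualChannel(50, 50, b"secret-A", b"secret-B")
ch.pay("A", "B", 30)
ch.pay("A", "B", 40)   # A's balance goes to -20: pure credit
print(ch.settlement_state()["balances"])  # {'A': -20, 'B': 120}
```

Note that nothing in this sketch is enforceable by the blockchain; the signed updates are only evidence that could, at best, be presented to a real court.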

Attention! If for some reason the other party turns out not to be trustworthy, one could lose as many funds as the other side virtually provided. In the best case the two nodes are owned by the same person or company, or there is some written agreement which backs the virtual payment channel. However, one obviously cannot enforce such circumstances and prohibit arbitrary people from creating a virtual payment channel. That is why virtual payment channels have to be used with extreme caution. As soon as you accept an inbound virtual payment channel from a third party, you vouch with your own funds for the payments of your channel partner. Besides that, there are a couple of other drawbacks I see with this proposal.

The 3 biggest drawbacks that I see are:

  1. A virtual payment channel can be seen as an IOU, resembling to some degree the mechanics of a demand deposit. One side can provide funds that it doesn't possess to a VPC and have the other side spend those funds. One cannot tackle this issue by implementing a rule that the network would not accept virtual payment channels if the sum of the amounts in the virtual payment channels is bigger than the sum of the capacities of the standard payment channels. The reason is that the network cannot prove, after a virtual payment channel has been closed and reopened, that settlement between both partners took place. Hence the potential channel capacity is really infinite. This does not mean that an infinite number of bitcoins can be created out of thin air.
  2. Infinite channel capacity is not a direct problem: in the worst case, when routing takes place from A to B, the standard channels of B might eventually be dried out, in the sense that no further routing away from B can take place. However, a network or circle of virtual payment channels could lead to weird chain reactions in which I pay someone and the money is only routed through virtual payment channels, literally creating a money flow out of thin air. I did not calculate this through, but I guess at some point it will lead to imbalances with real channels; yet in the first place, creating more virtual payment channels is not prohibited.
  3. Since the balance sheet is signed by both parties, leaking the key used for signing is like leaking the keys of a traditional payment channel. So even in a virtual payment channel the bitcoins are somewhat hot. One cannot directly steal bitcoins, but one could initiate sending the virtual balance to nodes one controls, and the virtual balance on the routes eventually becomes real balance. So again there needs to be complete trust between the channel partners.

Things to take into consideration:

  • It is always safe to send money over a virtual payment channel.
  • There is no risk in providing virtual balance in a virtual payment channel.
  • The risk lies in receiving money on a virtual payment channel, because the recipient needs to be sure she will get settled later. This holds true in particular if receiving money was linked to a task like routing it further through standard channels or selling some goods, both of which should be the standard use cases.
  • One could adopt routing fees that reflect those risks, depending on which direction the virtual payment channel is used in and on who provides it.
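As a toy illustration of the last point, the two directions of a VPC could be priced differently. The fee formula below follows the base-plus-proportional scheme that BOLT 7 channel updates use; the concrete numbers and the risk_ppm parameter are my own invention.

```python
# Hypothetical fee policy reflecting the asymmetric risk of a VPC:
# forwarding money *over* the virtual channel is safe, while receiving
# on it is not, so the risky direction may charge a premium.

def vpc_routing_fee(amount_msat, base_msat=1000, ppm=100, risk_ppm=0):
    """fee = base + proportional part, as in BOLT 7 channel_update fees."""
    return base_msat + amount_msat * (ppm + risk_ppm) // 1_000_000

safe_direction = vpc_routing_fee(1_000_000_000)                 # sending over the VPC
risky_direction = vpc_routing_fee(1_000_000_000, risk_ppm=500)  # receiving on the VPC
print(safe_direction, risky_direction)
```

How large such a risk premium should be would of course depend entirely on how much the channel partner is trusted.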

I will now describe a realistic setting which would benefit hugely from the chance to create virtual payment channels.

A use case for virtual payment channels: Payouts in mining pools.

Let us assume you are operating a mining pool. You want to do daily payouts for your customers, let us say 100k of them. Obviously your customers already trust you, since they commit their hash power to your pool and trust that you will eventually do a payout.

The payout scenario without virtual payment channels and without the lightning network.

Every day one has to fit 100k transactions into the blockchain. With batching one could obviously use fewer transactions, but the number of transactions and the fees will still be considerable.

If we have the lightning network, the situation does not even get better:

For the payouts on day one we have to create payment channels. Obviously a mining pool cannot overfund those channels by too much, since it would already need to have the liquidity to do so. So on day 2 funds either have to be spliced into the channels or the channels have to be closed and reopened; unless, of course, the money was already spent by the customer over the lightning network. (This however will only be possible if additional funds of the mining pool are used to fund other, external channels, giving customers the possibility to spend their funds in a meaningful way.) We see that using lightning in this case actually seems to increase the number of blockchain transactions and the amount of liquidity that is tied up.

Obviously the answer to the sketched problems could be that the mining pool does not have to operate channels to all its customers, but can just rely on the lightning network to find routes to the customers while doing payouts. I predict that in many cases it won't be possible for the pool to find a route to the customer when she creates an invoice, and a channel has to be funded anyway.

Situation with virtual payment channels on the lightning network: 

The pool could accept a virtual payment channel in which the amount the pool owes the customer sits on the customer's side of the VPC. This does not require a blockchain transaction and saves fees. For the second payout on the next day, the balance can just be increased by the amount the pool owes the user. Obviously this kind of payout is not real, as the customer does not directly control the funds; however, the pool provides cryptographic proof and a signature to the customer, which again cannot be enforced by the blockchain. The customer can at any time pay via lightning using exactly this virtual payment channel.

In order for this to be useful for the customer, his virtual payment channel needs to be connected to the "real" lightning network. This is achieved by the mining pool, which has to have several outgoing channels funded to rather central nodes in the network (e.g. shops, exchanges, payment providers, …). Since the pool can assume that not all customers will spend or transfer all their funds immediately, the pool only has to provide a fraction of the funds allocated in the virtual payment channels as outgoing capacity on the real channels. This reduces the amount of liquidity that needs to sit in the hot wallet of the lightning node. In case a channel dries out, funds will be spliced in or the channel is closed and reopened. This will happen far less frequently than payouts to all customers and thus reduces load on the blockchain. The customer can also already receive bitcoins over the lightning network on this virtual payment channel (if he decides to entrust even more funds to the virtual channel). In any case the customer can obviously also accept or demand payments on a standard payment channel by disallowing inbound routing over the virtual payment channel.
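A back-of-the-envelope calculation makes the liquidity saving concrete. All numbers here are my own illustrative assumptions, not figures from any real pool.

```python
# Sketch: how much real outbound liquidity a pool needs if only a
# fraction of the virtually owed funds is expected to move per day.

def required_outbound(total_owed_btc, expected_spend_fraction, safety_margin=2.0):
    # Outbound capacity on real channels, with a safety margin on top
    # of the expected daily spending.
    return total_owed_btc * expected_spend_fraction * safety_margin

# 100k customers owed 0.001 BTC each = 100 BTC sitting in VPCs,
# but if only ~5% of that is spent per day the pool funds far less
# on-chain (even with a 2x safety margin).
owed = 100_000 * 0.001
print(required_outbound(owed, 0.05))  # 10.0 BTC instead of 100 BTC
```

This is of course exactly the fractional-reserve character discussed above: the saving exists only because not everyone withdraws at once.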

Finally the customer could even get hold of his funds by fulfilling inbound routing requests on his standard payment channels which do outbound routing over the virtual payment channel.

For customers of the pool this mechanism probably also has other great advantages:

  • They could configure their software so that spending and routing first go through the VPC, so that their money is spent quickly.
  • They will probably be connected much better to many shops since the mining pool is maintaining and funding those real payment channels.
  • One could even think of a feature in which the pool "splices out" money from a virtual payment channel by funding a standard payment channel for the user, in case the pool can't find a route to the recipient of the customer. The pool could obviously just open that channel itself in order to have better connectivity to the lightning network in the future, improving the service for all its customers.

Conclusion:

We see that virtual payment channels increase the flexibility for groups of users that for some reason trust each other. We can think of a virtual payment channel as an agreement to spend the funds of the channel partner in one's own name. Applications are services that have plenty of customers and don't want to reallocate funds all the time, as well as very central power nodes that cannot handle all of their traffic on one computer and need to distribute the node across several computers. However, virtual payment channels should be used with great caution, as one needs to trust another party. In the case of just distributing one's own node this will never be an issue. In the case of the mining pool scenario, trust already needed to exist in the first place, so it should be OK, but it still poses a risk for the customer. The biggest downside is that a virtual payment channel can be seen as creating liquidity out of thin air and is a kind of credit that might break, forcing chain reactions. However, in this case it is just the channel partner who should have known better than to trust blindly. In particular, if large operators go bankrupt as a follow-up of such a chain reaction, everyone who has standard payment channels with those operators won't lose his or her funds, thanks to the trustless property of the lightning network.

To the best of my knowledge the concept of virtual payment channels for the lightning network has not been proposed before. If I am mistaken, I will be happy to receive a pointer. Now I am very excited to see what you as a community will think about my proposal. In particular I am excited since I understand that my proposal violates the core philosophy of the lightning network, namely that payment channels can operate in a trustless manner. Yet I think progress will only happen if creativity is allowed, by thinking about all possibilities instead of fixed limitations. Also, it will not change the trustless nature of the lightning network, as a VPC will only occur if two consenting parties agree to engage in such an endeavor. As outlined in this article, they might have good reasons to do so. In the great tradition of Bitcoin Improvement Proposals (BIPs), I suggest that we start a process for Lightning Network Improvement Proposals (aka LIPs). If people think this would be a useful feature, this draft would need to be specified in a little more detail and could in the end become such a LIP.

Obviously, even if my proposal doesn't make it into future BOLT specifications, the idea is out, and we cannot prevent companies like a mining pool operator from implementing this within the proprietary lightning-enabled wallet they provide to their customers. However I think it would be much more preferable if virtual payment channels either become part of the protocol standard or are never used.

If you liked this article and are interested in the follow ups you can follow me on twitter.

 

 

Thoughts about eltoo: Another protocol for payment channel management in the lightning network
https://www.rene-pickhardt.de/thoughts-about-eltoo-another-protocol-for-payment-channel-management-in-the-lightning-network/
Sun, 08 Jul 2018 21:56:08 +0000

In this article I will give an overview of the proposed lightning network extension eltoo. There has been quite some buzz about the eltoo paper proposed by Christian Decker, Rusty Russell and Olaoluwa Osuntokun in May 2018. Bitcoin magazines and blogs have been making promising statements that lightning will become even faster than it currently is. According to them, eltoo allows payment channels to interact less with bitcoin's blockchain (which, I will state right away, is just plain wrong).

Having read Christian Decker's main dissertation publication, in which the authors suggest the use of invalidation trees to invalidate old states of a payment channel, I was confident that eltoo would be a really nice improvement to the lightning network protocol. However, following the "Don't trust. Verify!" philosophy of the Bitcoin community, I obviously had to study the eltoo protocol proposal myself. In particular I did not believe the bullish news from the magazines, since from a technical perspective I do not see how the speed of the lightning network could significantly be increased: the speed of the lightning network in its current state is basically bound by the speed of TCP/IP sockets at the payment channel level and, during routing, by the time locks of the HTLCs. Since eltoo only looks at channel management, we can obviously omit the latter case. As I am starting to work on the splicing part of the BOLT 1.1 specification, which is a protocol extension of payment channel management, I obviously need to understand eltoo anyway.

Yesterday, after holding an openly licensed German-language lecture on the basics of the lightning network technology (not to be confused with BOLT, the lightning network RFC), I had a 5-hour train ride home from the event and finally found the time to carefully study the eltoo paper. The paper is very well structured and written, and I thought it might be helpful if I shared my understanding of the proposed protocol with the community. However, my main reason is that I hope that by writing down and explaining the paper I will get an even clearer understanding myself. In case I misunderstood something about the eltoo paper, please let me know so that I can update this article. Also, there are some rather technical questions I still have (for example this one, which I have already posted on bitcoin stack exchange).

So let us analyze eltoo, understanding the motivation for yet another payment channel protocol by reviewing some well-known facts about current payment channel management.

Unpublished commitment transactions encode the balance (or, more technically speaking, the state) of a payment channel in the lightning network.

For the current version of the lightning network payment channel protocol we need to store information about all the old states of a channel. By the state of a channel we mean the current distribution of the channel's capacity (aka the balance) among the channel partners. At its core, a payment channel (both the current one and the solution proposed in eltoo) is just a 2-of-2 multisignature wallet. Currently, the balance of a payment channel is encoded by two unpublished transactions that would spend the channel's capacity according to the balance sheet, with the channel partners as recipients. We call those transactions commitment transactions. These transactions are each signed by both channel partners, and each partner stores one of them on his lightning node. This creates some kind of information asymmetry and a risk of funds being stolen should the information leak to the channel partner.

The funds within the channel are reallocated by creating a new commitment transaction (kind of a double spend) that spends the funds of the multisignature wallet (the channel's capacity) in a different way. In order for both parties to be able to trust the newly agreed-upon channel balance, we obviously have the problem of invalidating the old commitment transactions. In a trustless setting (which we wish to achieve), channel partners cannot rely on their counterparts to delete the old commitment transaction once they agree on a new channel balance. The reason is that those commitment transactions are valid bitcoin transactions signed by both channel partners, and both partners own a copy and could theoretically have made more copies in the meantime.

All currently known protocols for maintaining a full duplex payment channel have in common that old channel states need to be invalidated. 

The current implementation of the lightning network achieves the goal of invalidating old channel states by including the hash of a revocation key within the output script of a commitment transaction. The revocation key itself can only be known by the channel partner that does not control a signed copy of that transaction. (Again, this is where the information asymmetry kicks in.) The output script of the commitment transaction will eventually spend the funds according to the channel balance that was agreed upon for this particular commitment transaction. However, it also allows the output of a published commitment transaction to be spent directly, sending the entire capacity of the channel to one's own bitcoin address, once one can present a signature from the secret revocation key. If the channel partners wish to update to a new channel state, they collaboratively compute the revocation key whose hash is stored in the most recent commitment transaction. The important requirement of the protocol design is that neither partner can compute the revocation key without the help of the other partner. Owning the revocation key prevents the channel partner from publishing an old commitment transaction (aka state), since the publishing side would be punished by the other side releasing the revocation key and spending the outputs of the commitment transaction directly. Note that releasing the revocation key itself is not dangerous, since it is only useful in the output script of the transaction that includes the hash of the revocation key. As you would imagine, each new state / commitment transaction includes the hash of a new revocation key. Therefore it is really the old transaction that needs to be kept secret or, even better, destroyed.
Obviously, with such a setup, both sides need to store all old revocation keys in order to be able to protect themselves against the other side publishing an old channel state (which for example could be a state in which all the funds belonged to the side that tries to publish this state).
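The penalty mechanism just described can be illustrated with a toy model. This is a deliberate simplification of my own: real commitment transactions use revocation basepoints and ECDSA signatures inside Bitcoin script, not a bare SHA-256 preimage check, and all names here are invented.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

class Commitment:
    """An unpublished commitment transaction for one channel state."""
    def __init__(self, state_num, balances, revocation_key_hash):
        self.state_num = state_num
        self.balances = balances
        # The output script commits to the hash of the revocation key.
        self.revocation_key_hash = revocation_key_hash

def can_claim_all_outputs(published: Commitment, revocation_key: bytes) -> bool:
    # The cheated party can sweep the entire channel capacity if and only
    # if it knows the revocation key whose hash is committed in the
    # published (old) commitment transaction.
    return h(revocation_key) == published.revocation_key_hash

old_key = b"revocation-key-for-state-1"
old_state = Commitment(1, {"A": 90, "B": 10}, h(old_key))
# After updating to state 2, both sides cooperated to reveal old_key.
# If A now publishes the old state anyway, B can punish:
print(can_claim_all_outputs(old_state, old_key))  # True
```

Note how this model also shows why both sides must keep every old revocation key: each old state commits to a different key hash.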

Eltoo suggests a different protocol for channel management, such that neither side needs to store all old states but just the current one. This is supposed to simplify implementing lightning nodes and also running them. According to Christian Decker this will also simplify implementing features like splicing. (On a side note, I have not thought the splicing operation completely through for eltoo, but it seems obvious to me that the chain of update transactions in eltoo can easily be extended by at least splice-out operations.)

The initial setup of a payment channel within the eltoo protocol is, in my opinion, exactly the same as in the current specification of the lightning network.
A payment channel is created by funding a 2-of-2 multisignature wallet. The state of the channel is again encoded within a transaction that is just not published yet; only the process of invalidating old states differs. Both channel parties create two sets of public and private keys. The first set is called the update keys and the second set is called the settlement keys. Actually, the settlement keys should be derived from a hierarchical deterministic wallet so that settlement transactions can only be bound to one specific update transaction (but I guess that is a rather subtle though important detail). According to Christian Decker, what is new is that

… commitment is no longer a single transaction customized for each side, but an update/commitment that is symmetric and a settlement built on top.

A chain of update and settlement transactions allows for encoding channel state.

The state of the channel is encoded as the most recent element of a sequence of settlement transactions, with the current settlement transaction encoding the current channel state. All the old settlement transactions are not needed and can safely be thrown away. Though not the same as the commitment transactions in current lightning network implementations, the settlement transactions can probably be considered their analogous counterpart. Each settlement transaction spends a so-called update transaction. The update transactions form a "linked list" starting from the funding transaction (which could itself be interpreted as an update transaction). Each update transaction spends the output of the funding transaction (or of a previous update transaction). The output of the new update transaction is a two-path multisignature script: in the first path the output can be spent directly with the pair of update keys; in the second path the output can be spent with the settlement keys once a timelock has expired. This always gives priority to double-spending an old settlement transaction with a newer update transaction. In case the newest settlement transaction is broadcast, no newer update transaction exists, which is why the double spend with another update transaction cannot be achieved by a single entity in the channel. Therefore, after the timelock is over, the channel is closed.
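The two spending paths just described can be captured in a toy model (my own naming throughout, with illustrative state numbers): a newer update transaction can always double-spend an older one before the timelock, and only when no newer update exists does the settlement confirm after the timelock.

```python
class UpdateTx:
    """Toy eltoo update transaction with its attached settlement state."""
    def __init__(self, state_num, settlement_balances):
        self.state_num = state_num
        self.settlement_balances = settlement_balances

def resolve(published: UpdateTx, newest: UpdateTx, timelock_expired: bool):
    # Path 1: a later update transaction spends the output immediately,
    # replacing the published (older) state.
    if newest.state_num > published.state_num:
        return ("replaced-by", newest.state_num)
    # Path 2: no newer update exists; the settlement transaction can
    # spend the output once the timelock has expired.
    if timelock_expired:
        return ("settled", published.settlement_balances)
    return ("waiting", None)

old = UpdateTx(3, {"A": 100, "B": 0})
new = UpdateTx(7, {"A": 40, "B": 60})
print(resolve(old, new, timelock_expired=False))  # ('replaced-by', 7)
print(resolve(new, new, timelock_expired=True))   # ('settled', {'A': 40, 'B': 60})
```

The model makes the priority visible: the honest side never needs the old states, only the newest update transaction, to displace anything older.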

This setup is bad, however, since the current channel balance is encoded in a settlement transaction which is the child of the latest update transaction, which is the child of the previous update transaction, and so on. Therefore, in order to close the channel, one would need to broadcast the complete chain of update transactions. This would be bad, as it would not take any load off the blockchain in comparison to directly transferring bitcoins every time one wants to update a payment channel, and we would also need to store all old update transactions, not really decreasing the overhead. With the current version of the bitcoin protocol there is nothing we can do about this, and eltoo would be nothing but a nice thought and completely impractical. However, a minor update to bitcoin (which was actually already talked about in the original lightning network paper) has finally been proposed as BIP 118 and would make eltoo useful.

Eltoo achieves the simplification by introducing the idea of floating transactions, which are realized by introducing a new sighash modifier SIGHASH_NOINPUT for the existing OP_CODES check{,multi}sig{,verify}.

With SIGHASH_NOINPUT a transaction that is created to spend some output will ignore the prevout_point (tx hash, output index), sequences, and scriptCode for its signature calculation. This makes it possible to change the output that is being spent, as long as the output script matches the input script. Since the output scripts of the update transactions are all the same (at least for the path that spends the output again to the update keys of the 2-2 multisig wallet), one can bind a later update transaction to the parents, grandparents, great-grandparents, … of the most recent update transaction. (I don’t see the point of summarizing all the technical details here since they are very well described in the paper and I don’t see how I would formulate them differently or shorter.) What this achieves is that instead of having to publish all previous update transactions, the current one can just be bound to the funding transaction and broadcast together with its settlement transaction. Since there is a timelock in the update transaction (it could be an invalid one), the settlement transaction should obviously only be broadcast after the timeout, since it would be rejected by the bitcoin network beforehand.
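The effect of the flag can be illustrated with a heavily simplified Python sketch; the real sighash algorithm commits to many more fields, so this only shows the core idea that blanking the prevout and scriptCode makes one signature rebindable to any output with a matching script:

```python
import hashlib

# Simplified sketch of what SIGHASH_NOINPUT changes -- the real sighash
# covers many more transaction fields; this only illustrates which parts
# are blanked out so a signature can be rebound to a different prevout.

def sighash(prevout, script_code, outputs, noinput=False):
    if noinput:
        # the prevout (txid, index) and scriptCode are not committed to
        prevout, script_code = b"", b""
    preimage = prevout + script_code + outputs
    return hashlib.sha256(hashlib.sha256(preimage).digest()).hexdigest()

outputs = b"update-output-script"
# With SIGHASH_NOINPUT the digest is identical no matter which update
# transaction's output is being spent:
h1 = sighash(b"txid-A:0", b"scriptA", outputs, noinput=True)
h2 = sighash(b"txid-B:0", b"scriptB", outputs, noinput=True)
assert h1 == h2
# Without it, the signature is pinned to one specific prevout:
assert sighash(b"txid-A:0", b"scriptA", outputs) != \
       sighash(b"txid-B:0", b"scriptB", outputs)
```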

That being said, the current channel state is defined by the already mined funding transaction as well as the most recent update transaction and its child, the settlement transaction. In case channel partner Mallory wanted to publish an older state (encoded within some settlement transaction) she would first need to publish the update transaction that is the parent of this very settlement transaction. She can do so, as she can also bind that corresponding update transaction to the funding transaction. However, if she now tries to spend the settlement transaction, there is the path in the output script of the update transaction that allows direct spending by a newer update transaction (which can be the newest update transaction – known by the other side – bound to that particular update transaction). This prevents old state from entering the blockchain.

There are some minor details about including a trigger transaction which I will omit here. However, it seems important to mention that this mechanism also requires the other side to be online or to have a watchtower mechanism in place. Also, with this mechanism, in the uncollaborative closing case, the collaborative one, and the case where one party tries to steal funds, at least three transactions need to be broadcast to the blockchain (the trigger transaction, the most recent update transaction and the settlement transaction). In that sense I understand eltoo as a tradeoff between convenience of channel management and minimal blockchain involvement. As I mentioned during the autopilot discussion at the lightning hackday, the blockchain will still need about 10 years to have one payment channel open for every internet user in the world. On the other hand, it seems to have many advantages over the current protocol for payment channel management.
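The ten-year figure is a back-of-envelope estimate; assuming (my assumptions, not from the eltoo paper) roughly one billion internet users, one on-chain transaction per channel open, and the whole ~300k transactions per day going into channel opens:

```python
# Back-of-envelope only; the user count and the assumption that all
# chain capacity goes into channel opens are mine.
users = 1_000_000_000   # channels to open, roughly one per internet user
tx_per_day = 300_000    # approximate on-chain transaction throughput
years = users / tx_per_day / 365
print(round(years, 1))  # -> 9.1
```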

Advantages of eltoo:

  • The biggest pro seems to be that it is easily compatible with the current lightning network. It does not matter if the network uses various protocols for payment channel management.
  • As fraudulent behavior can also occur by accident (e.g. a bug in the implementation), it is nice that with eltoo fraudulent behavior is not punished by giving all bitcoins to the honest side but rather by forcing a channel close and distributing the funds according to the last agreed-upon channel balance.
  • Way less protocol overhead on the implementation side (only true if we eventually support only eltoo channels and don’t have to support both types of payment channels).
  • (Although I did not explain the details here) fees for settling (closing) the channel do not need to be decided on while creating the payment channel but can be chosen a posteriori.
  • Possibility to easily create multi-user payment channels (though all parties have to be online in order to update the channel balance, which is sad because otherwise we could easily create payment caves in which thousands of participants would share the balance, increasing liquidity in the network). Also, the transaction size increases (but still stays smaller than having several 2-party channels created among the participants). Christian Decker pointed out that with Schnorr signatures there won’t be a difference in the size of the transactions for 2-party or n-party channels. I see that the signature part of the transaction won’t grow, but if more parties are in the channel more outputs are being created. (Maybe I am missing something; I haven’t had the chance to look at Schnorr signatures in detail.)
  • Since honest nodes can safely forget old channel state, we should never run into memory issues while maintaining a channel.

Disadvantages of eltoo:

  • It might incentivize people to try to behave in a fraudulent way since there is only the risk of losing fees involved.
  • SIGHASH_NOINPUT needs to be implemented in future bitcoin versions. Though the authors claim the protocol is backward compatible with the lightning network, this seems like a typical IT mess. It will probably take time for bitcoin and lightning network nodes to update. Obviously a routing network can be created with various channel management protocols as long as they support HTLCs and the internal API for routing. However, it seems quite cumbersome to implement future features to support both eltoo and the classical lightning payment channels. One strategy could be to just implement splicing on top of eltoo, or to implement it at a higher abstraction level of channel management (not sure yet if that is possible).

Open Questions:

  • What happens if we have an integer overflow in the sequence number?

Christian Decker answered this by stating that in case this really happens one can reset the counter by splicing or re-anchoring (a splice without changing the values).

If you are more interested in technical write-ups about the technology of the lightning network you can follow me on twitter. If you are also an enthusiast of the lightning network you can connect to my lightning node via: 036f464b54416ea583dcfae3872d28516dbe85414ed838513b1c34fb3a4aee4e7a@144.76.235.20:9735 and help grow the lightning network. Also I want to give a shout-out and thanks to Christian Decker, who reviewed this article and pointed out some mistakes which have been fixed.

Improve the autopilot of bitcoin’s lightning network (Summary of the bar camp Session at the 2nd lightninghackday in Berlin) https://www.rene-pickhardt.de/improve-the-autopilot-of-bitcoins-lightning-network-summary-of-the-bar-camp-session-at-the-2nd-lightninghackday-in-berlin/ https://www.rene-pickhardt.de/improve-the-autopilot-of-bitcoins-lightning-network-summary-of-the-bar-camp-session-at-the-2nd-lightninghackday-in-berlin/#comments Sun, 24 Jun 2018 22:02:55 +0000 http://www.rene-pickhardt.de/?p=2085

I have been visiting Berlin to attend the second lightninghackday and want to give a brief wrap-up of the event. This article basically covers two topics. First, as promised in my bar camp session on “Building an automated topology for autopilot features of the lightning network nodes”, I will give an extensive protocol / summary of the session itself. Second, I will talk about an already well-known technique called splicing, which I realized during the event might be one of the more important yet unimplemented features of lightning nodes.

Let me just make a quick shoutout to Jeff and his team from fulmo lightning: Thanks for organizing this excellent event. Especially bringing together such a group of high profile lightning network enthusiasts was astonishing for me. Out of the many tech events and conferences that I have attended in the past this was one of my top experiences.

In my bar camp session we had roughly 30 people attending. Luckily Conner Fromknecht and Olaoluwa Osuntokun from the San Francisco based Lightning Labs also joined the session and gave many insights resulting from their hands-on experience. I started the session with a short presentation of my thoughts about the problem. I had previously formulated those in my github repo as a rough draft / sketch for a whitepaper on the topic. That happened after I opened Issue 677 in the LND project criticizing the current design choices for the autopilot. The main points of those thoughts are a list of metrics and criteria I previously thought I would want to monitor and optimize in order to find a good network topology. Before the hackday that list looked like this (we discussed the list and people basically agreed on it):

  • Diameter: A small diameter produces short paths for onion routing. Short paths are preferable because failure is less likely if fewer nodes are involved in routing a payment.
  • Channel balance: Channels should be properly funded but also the funds should be balanced to some degree.
  • Connectivity / Redundancy: Removing nodes (in particular strongly connected nodes) should not be a problem for the remaining nodes / connectivity of the network.
  • Uptime: It seems obvious that nodes with a high uptime are better candidates to open a channel to.
  • Blockchain transactions: Realizing that the blockchain only supports around 300k transactions per day, the opening, closing and updating of channels should be minimized.
  • Fees for routing: Maybe opening a channel (which is cost-intensive) is cheaper overall than paying routing fees.
  • Bandwidth, latency, …: nodes can only process a certain amount of routing requests. I assume that the HTLCs will also lock channels for a certain amount of time during onion routing.
  • Internet topology: obviously routing through the network becomes faster if the P2P network has a similar topology to the underlying physical network. It also makes sense since even on the internet people might mostly use products and services within their geographic region (assumptions to be checked).
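As a rough illustration of how an autopilot could combine such a list of criteria, here is a hypothetical scoring sketch; the metric normalizations and the weights are entirely my own assumptions, not something agreed on in the session:

```python
# Hypothetical candidate-scoring sketch for an autopilot. The metrics,
# normalizations and weights are illustrative assumptions only.

def score_candidate(node, weights=None):
    weights = weights or {"uptime": 0.4, "degree": 0.3, "latency": 0.3}
    # Normalize each metric to [0, 1] before weighting.
    uptime = node["uptime_fraction"]                      # already in [0, 1]
    degree = min(node["channel_count"] / 100, 1.0)        # cap at 100 channels
    latency = 1.0 - min(node["latency_ms"] / 1000, 1.0)   # lower is better
    return (weights["uptime"] * uptime
            + weights["degree"] * degree
            + weights["latency"] * latency)

candidates = [
    {"id": "a", "uptime_fraction": 0.99, "channel_count": 50, "latency_ms": 80},
    {"id": "b", "uptime_fraction": 0.60, "channel_count": 90, "latency_ms": 40},
]
best = max(candidates, key=score_candidate)
print(best["id"])  # -> a  (high uptime outweighs the higher degree of b)
```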

Before I state the new ideas that came from the attendees and the discussion I want to briefly sum up the great keynote by the guys from Lightning Labs that preceded the bar camp session. In particular I think the insights and techniques they talked about (Atomic Multi Path routing and splicing) have a huge impact on the autopilot and the topology generation problems of the lightning network. Long story short, the magic concept in my opinion is splicing. For those unfamiliar with the topic: splicing is the process of updating the balance of a payment channel by adding or removing (partial) funds. In the past I always thought that even though it is not implemented in any of the lightning nodes, this is a problem which is technically rather trivial to solve and thus of minor importance. The guys from Lightning Labs basically stated the same, emphasizing that splicing funds out of a channel is trivial and can even be achieved in a non-blocking way, such that the channel can be used again directly after splicing out, even if the splicing transaction is not yet mined. However, splicing in (i.e. adding more funds to a channel) seems to be a little more cumbersome. Without going into too many technical details, the real eye-opener for me was the fact that splicing (together with the autopilot) seems to make lightning wallets as they exist today obsolete. This obviously is a game changer. So to make it crystal clear:

With splicing and the autopilot implemented in a standard bitcoin wallet (running in the background without users needing to be aware of it) users can send funds from one to another efficiently, quickly and cheaply. If a path via lightning exists the transaction will be lightning fast. If no path can be found one could just splice out some funds from existing channels to create a new channel for the transaction. This would basically boil down to a common on-chain transaction, which is what happened before we had the lightning network anyway. However, it doesn’t matter if all funds are currently frozen up in payment channels. Also it doesn’t waste the blockchain transaction but rather uses the opportunity to create a funding transaction for the next channel, increasing the size of the lightning network. I dare say a bigger lightning network is a better lightning network in general. Basically the goal would be that eventually all bitcoin funds are locked in payment channels (which with splicing obviously doesn’t lower the control or flexibility of the user). In case a user really needs to do a transaction which can’t be achieved via lightning it will just be spliced out and takes as much processing time as a regular on-chain transaction. As a disclaimer: obviously watchtowers are still needed, in particular in this scenario in which users might not even realize they are using the lightning network.

Taking the opportunities of splicing into consideration, I think that the topology problem of the autopilot becomes an issue of only minor importance. One can easily splice out from existing payment channels to new payment channels if needed. The only bad thing is that such a transaction is not routed at lightning speed but rather takes a couple of block times to be mined and processed. However, it eventually creates a user-generated network topology that hopefully pretty much follows the actual flows of funds and would thus be rather robust. The only drawback of such a process is that transactions frequently include channel creations, which takes some time, and that only a maximum of 300k channels can be altered per day on top of the current bitcoin protocol. This observation explains why topology generation by the autopilot is still a reasonable topic to think about, since it should still help to move traffic away from the blockchain.

Finally I will now list some new thoughts that have been collected during the session. I will also update my whitepaper soon. Feel free to fork me on github and do a pull request in order to fix mistakes or add your own thoughts.

Number of nodes reachable within x hops:
It was pointed out that this metric would look quite nice. As a result of the discussion we came to realize that this greedy heuristic would basically lead to a scenario in which every node opens a channel to the most central node in the network; such a central node increases the number of nodes that can be reached within x hops by the maximum amount. Still, it looks like an important number to optimize for.
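The metric itself is just a bounded breadth-first search; a minimal sketch on a toy adjacency list:

```python
from collections import deque

# Minimal sketch of the "nodes reachable within x hops" metric.

def reachable_within(graph, start, max_hops):
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if dist == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, dist + 1))
    return len(seen) - 1  # exclude the start node itself

# Tiny example: a chain a -- b -- c -- d
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(reachable_within(graph, "a", 2))  # -> 2 (b and c)
```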

Honesty and well behavior of nodes:
Following a similar philosophy we discussed whether a distributed topology creation algorithm should aim for the global health of the network, in comparison to greedy strategies in which every node tries to optimize its own view of the network. Though it wasn’t pointed out in the session, I think that a strategy where every node tries to optimize its own access to the network will at some point yield a Nash equilibrium. With my rather little understanding of game theory I think this might not necessarily be the best solution from a global perspective. Also we discussed that in the later mentioned scenarios where clients share information with neighbors, an algorithm must be robust against fraudulent behavior or lying nodes.

Stakeholder models:
Pretty much everyone agreed right away that different nodes might have different needs for the lightning network. So topology creation should be configurable (or learnable by the node), taking into account whether the node is just a casual consumer or a shop or a bank / exchange or …

Privacy vs information sharing:
We discussed quite extensively that for a distributed algorithm to make predictions about which channels should be created, it would be great if channel balances were public (or at least there was some rough information available about the distribution of the balance within one channel). We realized that as a first step, following the ideas of lnd issue 1253, nodes should start collecting historic information about their own operations and routing activities. Actually I just saw that a pull request that claims to have resolved issue 1253 already exists.
We also realized that channel fees might act as a reasonably good proxy for the channel balance. Assume Alice and Bob have a channel and the balance is very skewed, in the sense that Alice has almost no funds and Bob has all of them. If Bob was asked to route a payment through that channel he would probably charge a smaller fee than Alice would if she was asked to route a payment through her almost dried-up channel.
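This intuition can be sketched as a simple fee rule in which the quoted proportional fee decreases with the local balance; the formula and constants below are made up for illustration and are not how any current implementation sets fees:

```python
# Illustrative only: one way a node might set its proportional fee as a
# decreasing function of its local balance in a channel, so a dried-up
# side quotes higher fees. All constants are made up.

def quoted_fee_ppm(local_balance, capacity, base_ppm=1000):
    fullness = local_balance / capacity  # 0 = drained, 1 = all funds local
    # More local liquidity -> cheaper to route outward through this side.
    return int(base_ppm * (1.0 - fullness) + 1)

capacity = 1_000_000  # satoshis, hypothetical channel
alice_fee = quoted_fee_ppm(local_balance=50_000, capacity=capacity)
bob_fee = quoted_fee_ppm(local_balance=950_000, capacity=capacity)
assert alice_fee > bob_fee  # the drained side charges more
```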

Still, the idea circulated that nodes could share their channel balances with neighbors, in a similar fashion to how routing information in the IP network is shared with neighbors. In this way eventually a map of paths would be constructed for each node.

A point mentioned – that in my opinion is important but only somewhat related to these issues – was the fact that nodes should of course take higher routing fees if the timelock of the routing request is high, since in the worst case this makes a channel or many other paths unusable for quite some time, maybe even without a payment taking place. As a side note, I just realize that this could be a more sophisticated strategy for nodes to estimate their fees if they are aware of the number of routing paths their channel makes possible.

Some technical details:
One could use the number of disjoint paths between nodes as a good criterion, since it also enables heavy use of atomic multi path transactions. Also it was mentioned that one could look at the AS number of the underlying internet hosts.
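Counting edge-disjoint paths is a unit-capacity max-flow problem; libraries such as networkx provide this directly, but a small stdlib sketch (Ford-Fulkerson with BFS augmentation, directed edges for simplicity) shows the idea:

```python
from collections import deque

# Sketch: count edge-disjoint paths between two nodes via unit-capacity
# max-flow (Ford-Fulkerson with BFS augmentation). For an undirected
# channel graph you would add each edge in both directions.

def edge_disjoint_paths(edges, source, sink):
    cap = {}  # residual capacity per directed edge
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)  # reverse residual edge
    adj = {}
    for u, v in cap:
        adj.setdefault(u, set()).add(v)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow
        # push one unit of flow back along the found path
        v = sink
        while parent[v] is not None:
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

edges = [("a", "b"), ("b", "d"), ("a", "c"), ("c", "d"), ("a", "d")]
print(edge_disjoint_paths(edges, "a", "d"))  # -> 3
```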

Why not use machine learning instead of trying to find some smart algorithm / heuristics?
For me the most surprising idea was that this entire autopilot problem could easily be transferred into a machine learning problem. There are some obvious flaws, because single nodes might not have enough training data, and even with enough data, sharing the model would probably not work out of the box. So we discussed a little bit whether that would be a use case for transfer learning. I will not dig deeper into the topic here, since the article is already quite long. But working in the field of machine learning, being a data scientist, and not having put even the slightest thought into this idea before the event took place was a little bit shocking and surprising for me.

Anyway I hope my summary of the session will be useful for you and the community! My personal roadmap now consists of four things.

  1. I am thinking of adding a splicing protocol specification to the BOLT (lightning-rfc)
  2. I want to get running with go-lang and the codebase of lnd in order to be able to do some hands on experiments.
  3. I plan to update the very rough draft of the white paper.
  4. Finally I will hopefully find the time to hack a little python script that simulates how my above described splicing strategy would create a lightning network which is able to route most payment requests. If you want to be updated just follow me on twitter where I will inform you once I am done. Also feel free to leave a comment with more ideas or extend the draft of the white paper. I would love to join forces working on this topic.

Also kudos to Susette Bader who took this really nice snapshot and cover image of this post while we were hacking.

Extracting 2 social network graphs from the Democratic National Committee Email Corpus on Wikileaks https://www.rene-pickhardt.de/extracting-2-social-network-graphs-from-the-democratic-national-committee-email-corpus-on-wikileaks/ Thu, 28 Jul 2016 12:15:05 +0000 http://www.rene-pickhardt.de/?p=1989 tl;dr version: Source code at github!
A couple of days ago a data set was released on Wikileaks consisting of about 23 thousand emails sent within the Democratic National Committee that would demonstrate how the DNC was actively trying to prevent Bernie Sanders from becoming the Democratic candidate for the general election. I am interested in who the people with a lot of influence are, so I decided to have a closer look at the data.
Yesterday I crawled the dataset and processed it. I extracted two graphs in the Konect format. Since I am not sure whether I am legally allowed to publish the processed data sets, I will only link to the source code so you can generate the data sets yourself; if you don’t know how to run the code but need the information, drop me a mail. I also hope that Jérôme Kunegis will do an analysis of the networks and include them in Konect.

First we have the temporal graph

This graph consists of 39338 edges. There is a directed edge for each email sent from one person to another person and a timestamp when this has happened. If a person puts n recipients in CC there will be n edges added to the graph.

rpickhardt$ wc -l temporalGraph.tsv
39338 temporalGraph.tsv
rpickhardt$ head -5 temporalGraph.tsv
GardeM@dnc.org DavisM@dnc.org 1 17 May 2016 19:51:22
ShapiroA@dnc.org KaplanJ@dnc.org 1 4 May 2016 06:58:23
JacquelynLopez@perkinscoie.com EMail-Vetting_D@dnc.org 1 13 May 2016 21:27:16
JacquelynLopez@perkinscoie.com LykinsT@dnc.org 1 13 May 2016 21:27:16
JacquelynLopez@perkinscoie.com ReifE@dnc.org 1 13 May 2016 21:27:16

Clearly the format is: sender TAB receiver TAB 1 TAB date.
The data is currently not sorted by the fourth column, but this can easily be done. Clearly an email network is directed and can have multi-edges.
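For illustration, here is a rough sketch of how such edges can be pulled out of a raw mail with Python's standard email library (the published code works differently in detail; the addresses below are taken from the sample lines above):

```python
from email.parser import Parser
from email.utils import getaddresses, parsedate_to_datetime

# Sketch: extract (sender, recipient, timestamp) edges from one raw
# RFC 2822 mail. Example headers reuse addresses from the sample above.
raw = """From: GardeM@dnc.org
To: DavisM@dnc.org
Cc: ShapiroA@dnc.org
Date: Tue, 17 May 2016 19:51:22 +0000
Subject: example

body
"""

msg = Parser().parsestr(raw)
sender = getaddresses(msg.get_all("From", []))[0][1]
recipients = [addr for _, addr in
              getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))]
when = parsedate_to_datetime(msg["Date"])
edges = [(sender, rcpt, when) for rcpt in recipients]
for src, dst, ts in edges:
    print(f"{src}\t{dst}\t1\t{ts:%d %b %Y %H:%M:%S}")
```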

Second we have the weighted co-recipient network

Looking at the data I have discovered that many mails have more than one recipient so I thought it would be nice to see the social network structure by looking at how often two people occur in the recipient list for an email. This can reveal a lot about the social network structure of the DNC.

rpickhardt$ wc -l weightedCCGraph.tsv
20864 weightedCCGraph.tsv
rpickhardt$ head -5 weightedCCGraph.tsv
PaustenbachM@dnc.org MirandaL@dnc.org 848
MirandaL@dnc.org PaustenbachM@dnc.org 848
WalkerE@dnc.org PaustenbachM@dnc.org 624
PaustenbachM@dnc.org WalkerE@dnc.org 624
WalkerE@dnc.org MirandaL@dnc.org 596

Clearly the format is: recipient1 TAB recipient2 TAB count,
where count counts how often recipient1 and recipient2 have been together in mails.
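The construction itself is a few lines: every unordered pair of recipients of one mail increments a counter, and each pair is emitted in both directions as in the file above. A minimal sketch with made-up sample mails:

```python
from collections import Counter
from itertools import combinations

# Sketch of building the weighted co-recipient network: each unordered
# pair of recipients of one mail increments a counter. Sample data only.
mails = [
    ["PaustenbachM@dnc.org", "MirandaL@dnc.org", "WalkerE@dnc.org"],
    ["PaustenbachM@dnc.org", "MirandaL@dnc.org"],
]

pair_counts = Counter()
for recipients in mails:
    for a, b in combinations(sorted(set(recipients)), 2):
        pair_counts[(a, b)] += 1

for (a, b), n in sorted(pair_counts.items(), key=lambda kv: -kv[1]):
    # emit both directions, as in the published file
    print(f"{a}\t{b}\t{n}")
    print(f"{b}\t{a}\t{n}")
```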
 

Simple statistics

There have been

  • 1226 senders
  • 1384 recipients
  • 2030 people

included in the mails. The top 7 senders are:

MirandaL@dnc.org 1482
ComerS@dnc.org 1449
ParrishD@dnc.org 750
DNCPress@dnc.org 745
PaustenbachM@dnc.org 608
KaplanJ@dnc.org 600
ManriquezP@dnc.org 567

And the top 7 receivers are:

MirandaL@dnc.org 2951
Comm_D@dnc.org 2439
ComerS@dnc.org 1841
PaustenbachM@dnc.org 1550
KaplanJ@dnc.org 1457
WalkerE@dnc.org 1110
kaplanj@dnc.org 987

As you can see, kaplanj@dnc.org and KaplanJ@dnc.org both occur in the data set, so as I mention in the Roadmap section at the end of the article, more clean-up of the data might be necessary to get a more precise picture.
Still, at first glimpse the data looks pretty natural. In the following I provide a diagram showing the rank frequency plot of senders and receivers. One can see that some people are way more active than other people. Also the recipient curve is above the sender curve, which makes sense since every mail has one sender but at least one recipient.

You can also see the rank / co-occurrence count diagram of the co-occurrence network. When the ranks are above 2000 the standard network structure picture changes a little bit. I have no plausible explanation for this. Maybe this is due to the fact that the data dump is not complete. Still, the data looks pretty natural to me, so further investigations might make sense.

Code

The crawler code is a two-liner: just some wget and sleep magic.
The python code for processing the mails builds upon the python email library by Alain Spineux, which is released under the LGPL license. My code on top is released under GPLv3 and can be found on github.

Roadmap

  • Use the Generalized Language Model Toolkit to build Language Models on the data
  • Compare with the social graph from twitter – many email addresses or at least names will be linked to twitter accounts. Comparing the Twitter network with the email network might reveal the differences in internal and external communication
  • Improve quality of the data, i.e. better clean-up. Sometimes people in the recipient list have more than one email address; currently these are treated as two different people. On the other hand, sometimes mail addresses are missing and just names are included. These could probably be inferred from the other mail addresses. Also, names in this case serve as unique identifiers, so if two different people are called ‘Bob’ they become one person in the dataset.
Cleaning up my network connections on ubuntu linux using the network manager nmcli https://www.rene-pickhardt.de/cleaning-up-my-network-connections-on-ubuntu-linux-using-the-network-manager-nmcli/ https://www.rene-pickhardt.de/cleaning-up-my-network-connections-on-ubuntu-linux-using-the-network-manager-nmcli/#respond Thu, 16 Apr 2015 19:03:34 +0000 http://www.rene-pickhardt.de/?p=1955 After 4 years of running a pretty stable linux on my notebook I realized that too much software and too many dependencies had accumulated on my system, so I set up a clean system. By doing so I also switched to a tiling window manager called awesome (with which I am pretty happy so far). One problem (not really, since it is the purpose of going to awesome) though is that everything has to be done from the command line; in particular, when I join a network I have to use the network manager command line interface nmcli to set up my wireless or wired connection. That is not too much of a problem, but since I did a backup of all my old network connections I had quite a list of connections to look through in order to find the uuid of the network I wanted to join and enable the network with the suitable command. So I wanted to delete all the connections from hotels, airports and places that I am not visiting anymore. Obviously I could have done this by hand. But it’s much more fun to do it automatically, and in the long term it is really faster when you are mastering your bash tools (:
So here we go with a step by step explanation of the following command that will remove all your network connections that you have never used:

$ for i in `nmcli c | grep "never" | grep -o -- "[0-9a-fA-F]\{8\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{12\}"` ; do nmcli connection delete uuid $i ; done

Ok, let us first see the connections that I had in my manager:

$ nmcli connection

which would give me the following list

NAME UUID TYPE TIMESTAMP-REAL
Lobby 0f91bb0d-e2be-4f8a-a00e-457c8bdaf9a9 802-11-wireless never
TTH-Zentral 2d3ddca9-772f-47d9-99a1-1559640b0f25 802-11-wireless never
attwifi 6be3c5aa-fc85-415c-96e4-5583b25c23bb 802-11-wireless never
uni-koblenz 8a0d1d51-672c-4ded-ad92-18e27b8215df 802-11-wireless Do 16 Apr 2015 18:11:45 CEST
greenscafe 22dd0f8e-be2d-4402-af82-7716222add75 802-11-wireless never
WLAN-2DD138 5308ed9e-5def-430e-85f4-e1eb01426927 802-11-wireless never
Fairfield_GUEST b72080e0-c5dd-492a-b8a3-017fe7d6099c 802-11-wireless never
Hotel Sylter Hof d7c8fc6c-64e3-435f-b2c0-316c1f11ddf1 802-11-wireless never
TELENETHOTSPOT 1345bcda-26ab-4193-904c-088235d29873 802-11-wireless never
CHI_ECRC_2013 a316f6be-bb4c-4501-867b-6928f9f429ef 802-11-wireless never
AndroidAP5270 b5c5462c-072e-4e90-a3b2-def467562579 802-11-wireless never
hotel_harvey 7c6d1794-6d45-40aa-b7ac-f9a7f401d02e 802-11-wireless never
free-hotspot.com dc7351ce-ca6c-413e-ae8b-a82c2093525e 802-11-wireless never
PAT-WLAN 4662b075-9519-4f69-8ce9-b938ea8270c1 802-11-wireless never
dlink 5dab3493-cdd8-4d6c-a7a2-6def0836c2a5 802-11-wireless never
TTHKasse b9c2f9d9-aeda-454b-941b-bb1ccfc0ad9a 802-11-wireless never
BTOpenzone-B bede9d3a-9fc1-4dc2-9c75-0d2686c225f1 802-11-wireless never
MBTA_WIFI_Car0385_Box-068 17ac549c-574c-4bf4-a464-6b633f2a4ac3 802-11-wireless never
Wireless connection 1 17b18b79-411d-4d69-813d-6506f38b8ea5 802-11-wireless never
MBTA_WiFi_Car0620_Box-199 c36c572e-e469-4e86-a50f-87d6d68f4f7e 802-11-wireless never
*VIPARIS_WIFI 0593b927-644b-4604-a479-4fc652b6050d 802-11-wireless never
test 836af61a-8268-4a16-a5ab-036f3e0cf7e9 802-11-wireless So 15 Sep 2013 21:18:35 CEST
WL1A 46b6496f-6583-4306-a58f-4e4d2bdd5d50 802-11-wireless never
Hochhaus f6b80571-37d5-4bcd-b558-62084c61622b 802-11-wireless never
FreeWifi 379f9efd-b91d-433f-8ce2-209e4ed2099b 802-11-wireless never
NETCONNECT-6202 2f84b475-a852-49ac-a236-61b3f0bc9548 802-11-wireless never
stolteraa d789f49e-2643-4894-96a8-4e10ea3cbb69 802-11-wireless never
ALICE-WLAN28 17e4838f-bbaf-470c-bf24-9de948b42b00 802-11-wireless never
NAS QNAP ef6a08df-1c7c-48ba-b476-f3ff3b1e2669 802-3-ethernet Mi 15 Apr 2015 11:03:51 CEST
WLAN-8CA902 3046307e-69a3-4c99-89fb-b79f7f9ee24b 802-11-wireless Mi 01 Apr 2015 11:42:27 CEST
MfN-Guest 25d3765f-dcca-4f40-872a-8ffcfe307f25 802-11-wireless never
FRITZ!Box Fon WLAN 7360 7c428f50-078d-442a-9ba7-268853c888f0 802-11-wireless Mo 11 Feb 2013 23:39:08 CET
Arcor-362007-L 87a2d11c-3a3d-4b5e-a1b0-cc5b38916035 802-11-wireless Di 14 Apr 2015 17:42:33 CEST
Telekom_ICE 8763adcc-f27b-441f-9184-594128871351 802-11-wireless So 29 Mär 2015 19:56:35 CEST
IBMVISITOR 48227bb6-63e8-4a58-8b8f-1929f76d8ef2 802-11-wireless never
OWL 13ff2ed5-8ecf-4d67-b039-a3bc4cce56d9 802-11-wireless never
Parkhotel Kraehennest e763a256-4a2d-4813-b354-57ad8f642e2a 802-11-wireless never
EasyBox-AD3112 e4f342ae-d20b-4abc-9fe2-52b208d6b61b 802-11-wireless never
BTOpenzone 860cf4c0-45eb-4a57-a01e-cfd54933387f 802-11-wireless never
southshore 274cfaa8-2c08-4abc-bef3-3fd2608060b8 802-11-wireless never
annanet 06b3b3bc-a264-4d05-a848-3b5d6df9eed0 802-11-wireless never
shared 806c9806-fadc-472f-8460-318babbd38ea 802-3-ethernet Fr 10 Apr 2015 09:39:50 CEST
vidiu 3abf0848-b09c-457e-9c7a-958728f2c59f 802-11-wireless never
wlan 1 ad9e4a07-b66f-4f3f-8f6d-4f5a81b2ed54 802-11-wireless never
CITYHOTEL dec05243-06b6-4477-8813-0491b436f993 802-11-wireless never
eduroam f844713b-a726-4b5b-98a5-3b35411c4cf5 802-11-wireless never
Starlight-HotSpot 6842159f-94e9-4f5d-8a76-93b1f4c32912 802-11-wireless never
Dr_l)P35_21_22#342C5C e307b6e8-598b-4bd1-b47f-dc8a34ff12e0 802-11-wireless never
iPhone 5 9070ee1c-8ac1-4458-a590-cf241685b929 802-11-wireless never
Lummerland 7d2d4d89-9eb7-45c0-a963-a8a240e33100 802-11-wireless Mo 10 Feb 2014 22:35:03 CET
AndroidAP4089 ec201acc-9e6f-4c7b-9da8-6ed4539dde6c 802-11-wireless never
heartofgold 437e2b7d-5b76-4e3e-a101-698776293ede 802-11-wireless never
09FX09039648 e2cc4508-e18e-4c0c-a694-633ace5d095a 802-11-wireless never
WIRELESS_BEN_GURION_AIRPORT 8ff2846d-9e84-4175-8c4d-968e226e7a43 802-11-wireless never
Telekom 60aded97-f697-48c6-8070-91e1b786df82 802-11-wireless Mi 15 Apr 2015 00:14:14 CEST
Marriott_CONF a4ec2c22-8a31-46bb-a146-a02d8f7b2582 802-11-wireless never
TTH-Seminar4+5 23422256-0a4b-45b5-8394-43c92592564d 802-11-wireless never
FRITZ!Box 7330 SL a431f83e-6a73-4cf1-94ee-4409c19dfdb9 802-11-wireless Do 13 Feb 2014 18:45:48 CET
mycloud f3fcf1ae-eebb-4e37-a700-96035d0aea59 802-11-wireless never
ArcorWirelessLAN 428769c9-3dfc-43f0-a2dc-29f3c4ecb0fb 802-11-wireless never
TP-LINK_PocketAP_7329E2 a17a19b4-251e-41d7-a82e-1add52ec1317 802-11-wireless never
TTH-Taunus fbb24cae-c2aa-4057-be67-095b80604f5e 802-11-wireless never
BWI-WiFi 58e749f6-58e9-44fe-8528-6a1e3bacc88f 802-11-wireless never
uni vpn b28d76f8-57d2-4548-8959-9c47e189221e vpn never
Miri 🙂 74147f40-07c4-4e91-8a43-8c69bc733479 802-11-wireless never
wiew7 7193e045-9570-4d16-b4b6-d676fc19d05a 802-11-wireless never
ZurichAirport 0fe8cde1-de8b-48f9-9ca0-52ac129e9876 802-11-wireless never
mercure f3fbf40d-7c2b-4e6d-b5db-fce6888bb719 802-11-wireless never
HITRON-A600 501ac6d9-6912-46a1-9681-4a73e1009fe0 802-11-wireless never
HTC Portable Hotspot 61EF 3675cadc-251a-487c-8c4d-cf3747117f13 802-11-wireless never
FRITZ!Box 7312 a7d9b99d-63bc-4caa-b82c-1da9fec95cea 802-11-wireless never
o2-WLAN38 76a67dd7-3d50-4388-ba9f-07f52a328eca 802-11-wireless never
ALICE-WLAN36 e55d0ccd-7941-4da0-8ac8-99ed22adda49 802-11-wireless never
WEBPORTAL e2ce59eb-fa7d-4ece-88b7-bd0585a8d589 802-11-wireless never
FOSDEM 0cc3584b-1a5e-41fd-944e-df85f0a2d418 802-11-wireless never
BRITZ!Fox Fon WLAN 7360 SL fc70206c-04c8-4ae6-a0fd-7f769addf0b6 802-11-wireless never
Hotel Amsee3 f4ff9826-22d2-4f9b-91d9-8c63dc0f9bd4 802-11-wireless So 29 Mär 2015 11:56:21 CEST
ibis a348ed30-2fcf-4068-ba1c-176cdd9e7b10 802-11-wireless never
O2 Wifi 8935490d-313f-4fd0-b1fd-c78ab138af51 802-11-wireless never
FRITZ!Box Fon WLAN 7270 50d62587-5a48-4635-b30f-fbb1ded20455 802-11-wireless Di 30 Dez 2014 12:23:23 CET
Ambassador-Opera fdefea8c-d218-4632-bb07-b67ddb63f778 802-11-wireless never
HOTEL BB 05d9792c-6f9e-48a6-9c0c-6c03a9708954 802-11-wireless never
MBTA_WiFi_Car0353_Box-104 24a1527c-3083-44f3-8812-cacafb88cf7a 802-11-wireless never
theairline 54770eb9-78f2-4a3b-86cf-0e594a55c9f9 802-11-wireless never
republicansareontheirown 074ca340-643c-411e-95eb-d6e78f887fcf 802-11-wireless never
VPN/WEB 4a5f132b-81c2-49ba-8934-0257f98e5a54 802-11-wireless never
WLAN-6A1EA7 d2e80ec0-9bd3-4801-bada-65a695f7dc92 802-11-wireless never
Mahler 1d59c9a1-090c-4333-b2af-4154d5edd9c2 802-11-wireless never
guest-access c0ff03ac-a195-46a7-a507-5c1cf5c7f668 802-11-wireless never
Schlummerland 3d029511-2627-458a-a06b-e2c8e86d26b4 802-11-wireless never
MSDSL2 f2fbd0b7-26f6-4939-aece-c2c9d3621e40 802-11-wireless Do 16 Apr 2015 20:28:22 CEST
Urania 846e76bc-b96f-40aa-a30f-b6d2f84fd7f0 802-11-wireless never
CJDDSAWLN 55158cdb-07cc-4c02-8f44-a19ec2aeca06 802-11-wireless never
AndroidAP 375e398a-1b2e-4d2e-b4a5-4c05c904b109 802-11-wireless Fr 02 Jan 2015 16:29:17 CET
gesis-guest b9a00edd-9e28-4549-8f13-2634577c1276 802-11-wireless never
guests@WMDE 9aeccb7b-e629-4f0a-ab81-5300c7c433c0 802-11-wireless never
Wired connection 1 30ffcf0e-5181-41b0-b7d4-402de875889f 802-3-ethernet Mo 04 Mär 2013 15:10:44 CET
wlan a2f073a8-f0b0-4f58-a79a-2f22131016b1 802-11-wireless never
Wired connection 2 700ebb58-f049-4ec7-bac6-cd5801a975e2 802-3-ethernet Do 16 Apr 2015 18:11:44 CEST
Wired connection 3 bf70a943-9510-4883-ba64-a4bede826120 802-3-ethernet Do 16 Apr 2015 09:26:08 CEST
30 Min. Free Wi-Fi - Vodafone f5cd9efa-d735-41d5-a90f-ba83357f2e40 802-11-wireless never
WIKIMEDIA a40d0c35-8232-42db-b5a2-2118d69d6c41 802-11-wireless never
loganwifi 41e42d9c-efe9-45c5-9f8e-a2f5f441dd73 802-11-wireless never
WL3D efe99726-557a-4595-90e8-9c0fc3ee0c20 802-11-wireless never
m3connect 6ce7f634-5c44-4818-a055-cb32dca20738 802-11-wireless never
MIRI-PC_Network d1243377-9aed-4abf-8207-90b47840a48f 802-11-wireless never
innflux bd5be404-6a8b-402a-8a6f-856eaa054664 802-11-wireless never
LRTA24open2 dc95a098-107f-46c9-8d36-23cf575a319f 802-11-wireless never
LRTA24open3 5166cb9a-20ff-4d5c-b224-88f0b2276398 802-11-wireless never
Boingo Hotspot 954559e9-5f8c-4f9b-b2bc-36ff23f18d4a 802-11-wireless never

The interesting part is the fourth column, which conveniently says "never" if a connection was never used. That makes for an easy grep:

nmcli c | grep "never"

From this list I need the second column, in particular the UUIDs. I could have done this with some awk magic, but I decided grepping for UUIDs would be easier, so let's pipe the output to another grep:

nmcli c | grep "never" | grep -o -- "[0-9a-fA-F]\{8\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{12\}"

As we can see, the regular expression is really straightforward. Two arguments are given: -o, which prints only the matched part instead of the complete line, and --, which ends option parsing so that grep does not mistake a pattern starting with a hyphen for an option. The cool thing is that with the web and search engines you don't even have to build this regular expression on your own. Click on my search query for a blog article explaining the background: grep regular expression uuid
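To convince yourself that the expression matches, you can try it on a canned sample line first. The line below is just taken from the list above; nmcli itself is not needed for this check:

```shell
# One sample line standing in for the output of `nmcli c`.
# The UUID regex is exactly the one used above.
printf 'OWL 13ff2ed5-8ecf-4d67-b039-a3bc4cce56d9 802-11-wireless never\n' |
grep "never" |
grep -o -- "[0-9a-fA-F]\{8\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{12\}"
# prints: 13ff2ed5-8ecf-4d67-b039-a3bc4cce56d9
```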
Next we have to iterate over the result and use it in the nmcli connection delete command. From the nmcli documentation we know:

Usage: nmcli connection { COMMAND | help }
COMMAND := { list | status | up | down | delete }
list [id <id> | uuid <id>]
status [id <id> | uuid <id> | path <path>]
up id <id> | uuid <id> [iface <iface>] [ap <hwaddr>] [--nowait] [--timeout <timeout>]
down id <id> | uuid <id>
delete id <id> | uuid <id>

So the command should look something like:

nmcli connection delete uuid 954559e9-5f8c-4f9b-b2bc-36ff23f18d4a

except that 954559e9-5f8c-4f9b-b2bc-36ff23f18d4a will be replaced by each of the UUIDs from my grep pipeline.
So let's put this big grep in a loop and just echo everything to see if it works. Just put backticks around the grep and make a loop:

$ for i in `nmcli c | grep "never" | grep -o -- "[0-9a-fA-F]\{8\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{12\}"` ; do echo $i ; done
0f91bb0d-e2be-4f8a-a00e-457c8bdaf9a9
2d3ddca9-772f-47d9-99a1-1559640b0f25
6be3c5aa-fc85-415c-96e4-5583b25c23bb
22dd0f8e-be2d-4402-af82-7716222add75
5308ed9e-5def-430e-85f4-e1eb01426927
b72080e0-c5dd-492a-b8a3-017fe7d6099c
d7c8fc6c-64e3-435f-b2c0-316c1f11ddf1
1345bcda-26ab-4193-904c-088235d29873
a316f6be-bb4c-4501-867b-6928f9f429ef
b5c5462c-072e-4e90-a3b2-def467562579
7c6d1794-6d45-40aa-b7ac-f9a7f401d02e
dc7351ce-ca6c-413e-ae8b-a82c2093525e
4662b075-9519-4f69-8ce9-b938ea8270c1
5dab3493-cdd8-4d6c-a7a2-6def0836c2a5
b9c2f9d9-aeda-454b-941b-bb1ccfc0ad9a
bede9d3a-9fc1-4dc2-9c75-0d2686c225f1
17ac549c-574c-4bf4-a464-6b633f2a4ac3
17b18b79-411d-4d69-813d-6506f38b8ea5
c36c572e-e469-4e86-a50f-87d6d68f4f7e
0593b927-644b-4604-a479-4fc652b6050d
46b6496f-6583-4306-a58f-4e4d2bdd5d50
f6b80571-37d5-4bcd-b558-62084c61622b
379f9efd-b91d-433f-8ce2-209e4ed2099b
2f84b475-a852-49ac-a236-61b3f0bc9548
d789f49e-2643-4894-96a8-4e10ea3cbb69
17e4838f-bbaf-470c-bf24-9de948b42b00
25d3765f-dcca-4f40-872a-8ffcfe307f25
48227bb6-63e8-4a58-8b8f-1929f76d8ef2
13ff2ed5-8ecf-4d67-b039-a3bc4cce56d9
e763a256-4a2d-4813-b354-57ad8f642e2a
e4f342ae-d20b-4abc-9fe2-52b208d6b61b
860cf4c0-45eb-4a57-a01e-cfd54933387f
274cfaa8-2c08-4abc-bef3-3fd2608060b8
06b3b3bc-a264-4d05-a848-3b5d6df9eed0
3abf0848-b09c-457e-9c7a-958728f2c59f
ad9e4a07-b66f-4f3f-8f6d-4f5a81b2ed54
dec05243-06b6-4477-8813-0491b436f993
f844713b-a726-4b5b-98a5-3b35411c4cf5
6842159f-94e9-4f5d-8a76-93b1f4c32912
e307b6e8-598b-4bd1-b47f-dc8a34ff12e0
9070ee1c-8ac1-4458-a590-cf241685b929
ec201acc-9e6f-4c7b-9da8-6ed4539dde6c
437e2b7d-5b76-4e3e-a101-698776293ede
e2cc4508-e18e-4c0c-a694-633ace5d095a
8ff2846d-9e84-4175-8c4d-968e226e7a43
a4ec2c22-8a31-46bb-a146-a02d8f7b2582
23422256-0a4b-45b5-8394-43c92592564d
f3fcf1ae-eebb-4e37-a700-96035d0aea59
428769c9-3dfc-43f0-a2dc-29f3c4ecb0fb
a17a19b4-251e-41d7-a82e-1add52ec1317
fbb24cae-c2aa-4057-be67-095b80604f5e
58e749f6-58e9-44fe-8528-6a1e3bacc88f
b28d76f8-57d2-4548-8959-9c47e189221e
74147f40-07c4-4e91-8a43-8c69bc733479
7193e045-9570-4d16-b4b6-d676fc19d05a
0fe8cde1-de8b-48f9-9ca0-52ac129e9876
f3fbf40d-7c2b-4e6d-b5db-fce6888bb719
501ac6d9-6912-46a1-9681-4a73e1009fe0
3675cadc-251a-487c-8c4d-cf3747117f13
a7d9b99d-63bc-4caa-b82c-1da9fec95cea
76a67dd7-3d50-4388-ba9f-07f52a328eca
e55d0ccd-7941-4da0-8ac8-99ed22adda49
e2ce59eb-fa7d-4ece-88b7-bd0585a8d589
0cc3584b-1a5e-41fd-944e-df85f0a2d418
fc70206c-04c8-4ae6-a0fd-7f769addf0b6
a348ed30-2fcf-4068-ba1c-176cdd9e7b10
8935490d-313f-4fd0-b1fd-c78ab138af51
fdefea8c-d218-4632-bb07-b67ddb63f778
05d9792c-6f9e-48a6-9c0c-6c03a9708954
24a1527c-3083-44f3-8812-cacafb88cf7a
54770eb9-78f2-4a3b-86cf-0e594a55c9f9
074ca340-643c-411e-95eb-d6e78f887fcf
4a5f132b-81c2-49ba-8934-0257f98e5a54
d2e80ec0-9bd3-4801-bada-65a695f7dc92
1d59c9a1-090c-4333-b2af-4154d5edd9c2
c0ff03ac-a195-46a7-a507-5c1cf5c7f668
3d029511-2627-458a-a06b-e2c8e86d26b4
846e76bc-b96f-40aa-a30f-b6d2f84fd7f0
55158cdb-07cc-4c02-8f44-a19ec2aeca06
b9a00edd-9e28-4549-8f13-2634577c1276
9aeccb7b-e629-4f0a-ab81-5300c7c433c0
a2f073a8-f0b0-4f58-a79a-2f22131016b1
f5cd9efa-d735-41d5-a90f-ba83357f2e40
a40d0c35-8232-42db-b5a2-2118d69d6c41
41e42d9c-efe9-45c5-9f8e-a2f5f441dd73
efe99726-557a-4595-90e8-9c0fc3ee0c20
6ce7f634-5c44-4818-a055-cb32dca20738
d1243377-9aed-4abf-8207-90b47840a48f
bd5be404-6a8b-402a-8a6f-856eaa054664
dc95a098-107f-46c9-8d36-23cf575a319f
5166cb9a-20ff-4d5c-b224-88f0b2276398
954559e9-5f8c-4f9b-b2bc-36ff23f18d4a

That looks great, so replace the echo with the real command:

nmcli connection delete uuid $i

This leads to the following final command, which is identical to the one at the top of the article, just with some line breaks for better readability.

$ for i in `nmcli c | \
grep "never" | \
grep -o -- "[0-9a-fA-F]\{8\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{12\}"` ; \
do nmcli connection delete uuid $i ; \
done
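For anyone who prefers to avoid backticks: piping into a while-read loop reads a bit easier and avoids word-splitting surprises. The sketch below is a dry run on two canned lines from the list above instead of live `nmcli c` output, and it only echoes; swap the `printf` for `nmcli c` and the `echo` for `nmcli connection delete uuid "$uuid"` to really delete.

```shell
# Dry run: two canned lines stand in for `nmcli c`; only the
# never-used connection's UUID makes it through the two greps.
printf '%s\n' \
  'OWL 13ff2ed5-8ecf-4d67-b039-a3bc4cce56d9 802-11-wireless never' \
  'shared 806c9806-fadc-472f-8460-318babbd38ea 802-3-ethernet Fr 10 Apr 2015' |
grep "never" |
grep -o -- "[0-9a-fA-F]\{8\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{4\}-[0-9a-fA-F]\{12\}" |
while read -r uuid ; do
  echo "would delete $uuid"   # replace with: nmcli connection delete uuid "$uuid"
done
```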

Now I only have a few connections left in my network manager, so I can easily switch connections from the command line depending on where I am:

$ nmcli c
NAME UUID TYPE TIMESTAMP-REAL
AndroidAP 375e398a-1b2e-4d2e-b4a5-4c05c904b109 802-11-wireless Fr 02 Jan 2015 16:29:17 CET
FRITZ!Box 7330 SL a431f83e-6a73-4cf1-94ee-4409c19dfdb9 802-11-wireless Do 13 Feb 2014 18:45:48 CET
NAS QNAP ef6a08df-1c7c-48ba-b476-f3ff3b1e2669 802-3-ethernet Mi 15 Apr 2015 11:03:51 CEST
MSDSL2 f2fbd0b7-26f6-4939-aece-c2c9d3621e40 802-11-wireless Do 16 Apr 2015 20:28:22 CEST
shared 806c9806-fadc-472f-8460-318babbd38ea 802-3-ethernet Fr 10 Apr 2015 09:39:50 CEST
Hotel Amsee3 f4ff9826-22d2-4f9b-91d9-8c63dc0f9bd4 802-11-wireless So 29 Mär 2015 11:56:21 CEST
Lummerland 7d2d4d89-9eb7-45c0-a963-a8a240e33100 802-11-wireless Mo 10 Feb 2014 22:35:03 CET
FRITZ!Box Fon WLAN 7270 50d62587-5a48-4635-b30f-fbb1ded20455 802-11-wireless Di 30 Dez 2014 12:23:23 CET
Arcor-362007-L 87a2d11c-3a3d-4b5e-a1b0-cc5b38916035 802-11-wireless Di 14 Apr 2015 17:42:33 CEST
Telekom 60aded97-f697-48c6-8070-91e1b786df82 802-11-wireless Mi 15 Apr 2015 00:14:14 CEST
Wired connection 3 bf70a943-9510-4883-ba64-a4bede826120 802-3-ethernet Do 16 Apr 2015 09:26:08 CEST
FRITZ!Box Fon WLAN 7360 7c428f50-078d-442a-9ba7-268853c888f0 802-11-wireless Mo 11 Feb 2013 23:39:08 CET
test 836af61a-8268-4a16-a5ab-036f3e0cf7e9 802-11-wireless So 15 Sep 2013 21:18:35 CEST
Wired connection 1 30ffcf0e-5181-41b0-b7d4-402de875889f 802-3-ethernet Mo 04 Mär 2013 15:10:44 CET
Telekom_ICE 8763adcc-f27b-441f-9184-594128871351 802-11-wireless So 29 Mär 2015 19:56:35 CEST
Wired connection 2 700ebb58-f049-4ec7-bac6-cd5801a975e2 802-3-ethernet Do 16 Apr 2015 18:11:44 CEST
WLAN-8CA902 3046307e-69a3-4c99-89fb-b79f7f9ee24b 802-11-wireless Mi 01 Apr 2015 11:42:27 CEST
uni-koblenz 8a0d1d51-672c-4ded-ad92-18e27b8215df 802-11-wireless Do 16 Apr 2015 18:11:45 CEST

]]>
https://www.rene-pickhardt.de/cleaning-up-my-network-connections-on-ubuntu-linux-using-the-network-manager-nmcli/feed/ 0
What happened to Vensenya's "Changing mindset" project? https://www.rene-pickhardt.de/what-happened-to-vensenyas-changing-mindset-project/ https://www.rene-pickhardt.de/what-happened-to-vensenyas-changing-mindset-project/#respond Thu, 02 Apr 2015 12:27:12 +0000 http://www.rene-pickhardt.de/?p=1950 Two and a half years ago I posted about Simon's project, which at that time was just starting and still seemed very fuzzy to me. Still, I donated 150 Euro and asked others to do the same. It was the trust I had in him that made me believe it would work out great, even though it was not yet clear how.
Today I received an email saying that the project has become much more focused and will finally go public in September 2015. They will produce a TV series that will be published on YouTube. Have a look at their trailer (in German).

Together with youngsters they produce a series about the lives, problems and challenges of youngsters. They focus on a growth mindset approach that moves from "I can never do this" to "I will be able to do this." The best thing is the authenticity of the project: it is done with non-professional actors, camera operators and editors, and even the equipment is borrowed. It seems the project will achieve really high quality while being low budget – right in the sense of: "Of course we can do this if we really want to, and we don't need much money."
In that spirit they have a second crowdfunding campaign (which I guess is much more about publicity than about raising money), which I warmly recommend supporting:
https://socialimpactfinance.startnext.com/kaempfergeist
I will certainly keep you up to date as soon as the result is published! But first I will email Simon and ask whether it will be possible to use an open license for the material. I guess they want to earn money by licensing, but for a social, crowdfunded project I think an open license would be appropriate.

]]>
https://www.rene-pickhardt.de/what-happened-to-vensenyas-changing-mindset-project/feed/ 0
Creating an award winning video doesn’t need much technology or technical know how. https://www.rene-pickhardt.de/creating-an-award-winning-video-doesnt-need-much-technology-or-technical-know-how/ https://www.rene-pickhardt.de/creating-an-award-winning-video-doesnt-need-much-technology-or-technical-know-how/#respond Thu, 04 Dec 2014 15:18:18 +0000 http://www.rene-pickhardt.de/?p=1934 After winning the community award in the Wikipedia video contest, in the category documentation and interview, with my pointer in C video, I would like to share some experiences on creating educational videos. This is mainly to encourage anyone to do the same as I did.
Have a look at the winning video again if you don't remember it:

In my opinion it doesn't take much more than a real interest in education. The video that won the award was used by me in a real teaching scenario. I only had one dry run before recording the one and only published version (which, with more iterations, could still be a little shorter, more focused and slicker). Most of the time (about 3 hours) went into planning how to present the learning content – something everyone who teaches should do anyway. In total it took me less than 5 hours, including planning, the dry run, recording, uploading and sharing with students.
The impact: starting from the 16 students who were participants in my class, the video has now been watched about 10 thousand times. In particular, it was included in the Wikipedia article on pointers and is thus hopefully a helpful resource for people interested in that topic.
Most importantly, I did not need expensive technology. As you can see from the attached picture, I did not even have a proper way of mounting my digital camera. The microphone was the internal one of that very camera. I used a couple of books together with a ruler to bring the camera into the correct position for a nice shot of the whiteboard I was using. Other than that, I used two lamps for proper light and lowered the outside curtains of the window.

What I am basically saying: everyone who owns a camera (which most people nowadays do) can take a video and explain something. You can contribute your explanatory video to the growing knowledge base on Wikimedia Commons. You can contribute to the ongoing discussion on whether Wikipedia articles should be enhanced with videos or not. Most importantly, if you do everything on the whiteboard like I did, you will most certainly not run into any of the copyright problems I ran into before.
So what are you waiting for? I am sure you are an expert on something. Go and give it a shot and share your video here in the comments, but also via Wikimedia Commons, and maybe even include it in a Wikipedia article where it fits well.

]]>
https://www.rene-pickhardt.de/creating-an-award-winning-video-doesnt-need-much-technology-or-technical-know-how/feed/ 0
About the future of Videos on Wikiversity, Wikipedia and Wikimedia Commons https://www.rene-pickhardt.de/about-the-future-of-videos-on-wikiversity-wikipedia-and-wikimedia-commons/ https://www.rene-pickhardt.de/about-the-future-of-videos-on-wikiversity-wikipedia-and-wikimedia-commons/#respond Tue, 04 Nov 2014 18:47:27 +0000 http://www.rene-pickhardt.de/?p=1923 In the following article I want to give an overview of the discussions and movements going on around video and multimedia content for Wikipedia and its sister projects. I will start with some positive experiences and then tell you about some bad ones. This article is not meant to whine about some edit war; it is more about observing an undecided, open topic within the community of Wikipedians.
During my time as a PhD student I actively contributed to open educational resources by uploading, so far, 52 educational videos to Wikimedia Commons. Some of those videos were created together with Robert Naumann, and another share was uploaded by him. A large fraction of those videos were made for the Web Science MOOC and can be found at:
https://commons.wikimedia.org/wiki/Category:Videos_for_Web_Science_MOOC_on_Wikiversity
Last week we submitted the following video to the OPERA Award, an award for OER video material that was established with the goal of encouraging more such content.

As you can see, it was selected as the media file of the day on November 2nd on Wikimedia Commons (*cheering*). Can anyone show me how this happened? I looked for the process but did not find it.
I have also included another video about pointers in C (in German: "Zeiger in C") in a Wikipedia article.

Does wikipedia like videos within articles?

In my experience, the pointer video was removed a couple of times from the German Wikipedia article on that topic and then brought back again. So it seems there isn't any consensus within the community yet about having videos. Interestingly enough, I was asked by some Wikipedians to submit my video to a video competition they are running. The goal of this competition is to get more content creators like me to upload their material to Commons and include it in Wikipedia articles. This effort seems to be funded by money donated by the users. There seems to be a similar project in the English Wikipedia. So at least money is flowing towards creating more video content.
Even though these seem to be strong arguments, I have the feeling that not the entire Wikipedia community supports this movement – or one could call it a strategic move. A year ago, without knowing about these kinds of efforts, I tried to include some of the Web Science videos in Wikipedia articles. For example, I included the following video:

in the corresponding Wikipedia article. It was removed with a statement saying this would be video SPAM, which in my opinion is a bit of an overreaction.
A summary of the discussion can be found in the slides of my talk at the German open educational resources conference:
2014MoocOnWikiversity
If you are interested, you can find the entire discussion on the discussion page of the Ethernet frame article.

Problems for creating video Content for Commons:

Obviously there is a copyright problem. For example, I have pointed out in the past that creating a screencast during a lecture on a Windows machine means committing a copyright violation, since the Start button and the Windows interface are protected by copyright under the Microsoft EULA. Also, in earlier discussions at #OER13de we agreed that it is hard to collaboratively edit videos (sorry, link in German), because the software is often not free and Wikimedia Commons does not support uploading the source files of the videos anyway.

Conclusion

It is not clear whether video content will survive in Wikipedia, even though some strategic effort is being put into that idea. The people who are against it have pretty decent arguments, and I agree that it is really hard to find a tool for collaboratively editing video files. Without such tools, even access to the source files of the videos would make it hard for people to work on this together. So I am curious to see what the competitions will bring and how the discussions on videos will evolve over time.
At least on Wikiversity we are able to use our videos for teaching as we anticipated, and I am pretty sure this space won't be affected by the ongoing discussion.

]]>
https://www.rene-pickhardt.de/about-the-future-of-videos-on-wikiversity-wikipedia-and-wikimedia-commons/feed/ 0
How to host your OER MOOC on Wikiversity https://www.rene-pickhardt.de/how-to-host-your-oer-mooc-on-wikiversity/ https://www.rene-pickhardt.de/how-to-host-your-oer-mooc-on-wikiversity/#respond Tue, 04 Nov 2014 11:29:47 +0000 http://www.rene-pickhardt.de/?p=1913 Last year we created a MOOC on Web Science. We chose Wikiversity as the platform for hosting the MOOC because of the high trust we had in the Wikimedia Foundation's strengthening of the open movement. The main problem we experienced with Wikiversity is that the software running it is a MediaWiki, which is great for collaboratively building an encyclopedia but not so well suited to providing a learning environment in which students can focus on an interactive learning experience. It is also hard for teachers to learn how to use the MediaWiki software.
So I decided to spend some time together with Sebastian Schlicht (my student assistant, who did an excellent job) building a bit more infrastructure on top of the MediaWiki on Wikiversity to provide a better interface for learning. Watch the demo here:

As you can see we created a platform that supports:

  • A point-and-click experience for teachers to create classes
  • On-page discussion for students, built on the standard discussion system in MediaWiki
  • A nice modern navigation that adapts to users while they interact with the page

For me, with this system our videos, quizzes and scripts shine in a much brighter light than they did before. For the first time I have fun consuming the content of the MOOC.
For me this was an important step towards my goal of freeing educational content. Not only is our MOOC completely OER; we are now also creating core infrastructure for any teacher to create more OER classes. If you are considering doing such a class, feel free to drop me a message and receive free support. You could also start by reading the documentation of the MOOC-Interface or see the slides (: 
2014MoocOnWikiversity
I am looking forward to hearing back from you.

]]>
https://www.rene-pickhardt.de/how-to-host-your-oer-mooc-on-wikiversity/feed/ 0
Copyright violations: Videos from our OER Web Science MOOC deleted from Wikimedia Commons https://www.rene-pickhardt.de/copyright-violations-videos-from-our-oer-web-science-mooc-deleted-from-wikimedia-commons/ https://www.rene-pickhardt.de/copyright-violations-videos-from-our-oer-web-science-mooc-deleted-from-wikimedia-commons/#comments Tue, 04 Nov 2014 11:08:38 +0000 http://www.rene-pickhardt.de/?p=1860 I understand that the following article is written in a very personal way. But this situation seems so unjust to me that it is just unbelievable. So this is the sad story of me trying to bring free educational resources to the world and Microsoft indirectly not allowing me to do so 🙁 The following article is dedicated to Aaron Swartz:

Background:

Copyright is f***ed up on this planet.

We have created almost all of our 69 videos produced so far by ourselves. The videos which we did not produce ourselves have been published under a Creative Commons BY licence by the copyright owners. In one case I even called a professor in the United States and asked him to change the licence of his videos on YouTube so that we could reuse them within the Wikimedia Commons ecosystem, which he did (:
So you might think everything is alright: the guys paid attention to proper licences when they used material by others, and for the rest they created everything themselves. Unfortunately, this is not true.
For some of our flipped classroom sessions we created Hangouts on Air with screencasts of our Smartboard. Currently our university only supports the Smartboard software SMART Notebook on Microsoft Windows. Creating a screencast on a Microsoft operating system is problematic, since the Microsoft Start button is visible, as is the user interface of SMART Notebook. At least the Microsoft interfaces are protected under copyright, and I believe similar constraints hold for SMART Notebook. As a consequence, we cannot put a Creative Commons licence on these materials. Consequently, we must not host them on Wikimedia Commons, as Wikimedia Commons supports only free content.
What we can do now is move the videos to Wikiversity, which allows material under fair use. OK, great: I can still host my course, but parts of it are not free anymore. Don't worry, you don't have to pay, like you do at other sites. But you lose a lot of your freedom. You cannot remix, correct, translate, […] the videos. In particular, I am not even sure I am legally allowed to publish the videos under the terms of fair use. I am not an American citizen, and my university clearly is a German institution. Fair use is a United States law. OK, we are hosting the materials on an American website, but will that be sufficient? The last time I had a similar legal problem and asked the legal consultants of our university, the only answer I received was: "Better take the material down. You don't want to end up in a legal fight." So not only do we have absurd laws influenced by money-making industries, we are also scared of those industries.
On the other hand, being forced to move to fair use will allow me to include a lot of Creative Commons materials that carry the NC tag in their licence. Not that I no longer want to do open educational resources. But the quality of the MOOC also suffered from not being able to include CC-NC material.

Think about this again:

We as a university – and ultimately as a society, since the university is paid for by tax money – pay high licence fees to Microsoft in order to be allowed to use their crappy software. We are then forced by the administration, if we want to use modern technology like smartboards, to use Microsoft software. We pay high wages for professors, me and technical staff to create a free and open online course. And now Microsoft – which I did not even choose to use but was forced to use by our university, which is just following the majority vote of computer users – is telling me that I cannot publish the content I created under the licence I want.
You might say: "Hey, calm down. What's the problem? The course is still online and nothing has changed." But that is the problem: everything has changed. As a society we don't pay attention to the subtleties and then wonder why we have unjust laws.

Conclusions:

We need to rethink our laws. It is we who make them anyway! Regional laws conflict with the idea of a global network (fair use, for example). Many ideas of copyright are just not suitable for a tech-driven world in which sharing, citing and giving attribution and fame to people who create something have fundamentally changed. Laws like the ones mentioned are just outdated and ridiculous. Other laws like https://de.wikipedia.org/wiki/Depublizieren (sorry for a link to the German Wikipedia; I might translate the article at some point) also fall into this category.

]]>
https://www.rene-pickhardt.de/copyright-violations-videos-from-our-oer-web-science-mooc-deleted-from-wikimedia-commons/feed/ 1