Usually I would give someone like Bram Cohen the benefit of the doubt, but this paragraph:
“We want people to use and adopt BitTorrent Live. But we aren’t planning on encouraging alternative implementation because it’s a tricky protocol to implement and poorly behaved peers can impact everyone. We want to ensure a quality experience for all and this is the best approach for us to take,” Cohen told TorrentFreak.
So they are relying on the patent system to ensure that every peer is playing nicely? In a p2p protocol? What could possibly go wrong?
It might be that they don't need every peer to play nice, just a quorum of peers. If that were the case, a buggy client that gained popularity quickly could still mess things up by accident, but an attacker would need a botnet to do it intentionally.
It also looks like they're making this free as in beer:
> Bram Cohen explains that the patent is in no way going to restrict users' access to the new protocol, quite the contrary. BitTorrent Live will be available to end users for free, and publishers who are using the service and hosting it on their own will not be charged either.
I will have to read the details very closely, but on principle this reeks of monopolistic practices. It doesn't matter that it's Bram Cohen.
I really like Bram, but what is revolutionary here?
I was pretty heavily involved in the p2p field back in the early '00s and I've read extensively on the subject. Even back then there was plenty of research into peer-casting, including peer clustering by proximity metrics. The idea is bloody obvious, and it all inevitably boils down to constructing an efficient and resilient overlay network, which is a very well researched domain. The reason there's not much of it implemented is that there was always a simpler (read: dumber) solution that worked just as well in practice. Think YouTube vs. Joost.
Can anyone with more recent exposure to p2p stuff comment on whether this is indeed an innovation, or just PR spin on a patent application?
The patent is about breaking peers out into small sub-swarms that look after each other. The healthier peers in each group focus on keeping their group healthy while also keeping up with the larger swarm. The greater BitTorrent protocol doesn't do this and, judging by the patent filing, other p2p streaming applications don't either, which is why they fall down under high demand.
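Roughly, as a toy sketch of that grouping idea (the names, group size and cross-link count here are made up for illustration; this is not the patented algorithm): peers get dealt into small clubs that trade with each other directly, and a couple of members of each club also bridge out to other clubs so the groups stay loosely stitched into one swarm.

    import random

    def build_subswarms(peer_ids, group_size=8, cross_links=2, seed=0):
        """Toy illustration: deal peers into small groups ("clubs") and pick
        a few members of each group to also connect to another group, so
        the clubs stay loosely connected to the wider swarm."""
        rng = random.Random(seed)
        peers = list(peer_ids)
        rng.shuffle(peers)
        groups = [peers[i:i + group_size] for i in range(0, len(peers), group_size)]

        overlay = {}
        for gi, group in enumerate(groups):
            for p in group:
                # Everyone in a club peers with everyone else in that club.
                overlay[p] = {"group": gi,
                              "neighbours": [q for q in group if q != p],
                              "bridges": []}
            if len(groups) > 1:
                # A couple of members also bridge to a randomly chosen other club.
                for p in rng.sample(group, min(cross_links, len(group))):
                    other = rng.choice([g for j, g in enumerate(groups) if j != gi])
                    overlay[p]["bridges"].append(rng.choice(other))
        return overlay

    print(build_subswarms(["peer%d" % i for i in range(25)])["peer0"])

The hard part, and presumably what the filing actually covers, is how the groups are sized, re-formed as peers churn, and kept honest.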
1. We learn that the intelligence will always live on the edges of the network - in software, not in routers
2. We learn that network providers are just utilities, and should start to act like them. Net neutrality is merely one item on the list
3. We learn that multicast is back
4. We learn that the total volume of data sent is the same even if it is sent efficiently from local peer to local peer - the networks will need upgrading to handle the volume, and the utilities had better learn to accept they are capital businesses again and stop trying to do marketing
5. And we learn that YouTube will rule the world
After working at BitTorrent on the Live team for almost two years (I recently left for a job that fit me a little better), mostly on the software supporting the actual core protocol, which Bram himself largely works on, it is awesome to see Live finally being released to the public. I cannot wait to see how the public will use it (I am sure anyone with an imagination can think of a thousand ways a P2P live streaming protocol could be useful and powerful). It will be interesting to see how usable people find it and whether they end up adopting it or sticking with the current RTMP server-client style architecture.
One thing I always found interesting while working on Live is that, although in some ways live video streaming seems like a more or less undeveloped field, there is actually already a super successful P2P live video streaming implementation called PPLive that is BIG in China ( http://en.wikipedia.org/wiki/PPLive ).
Another interesting thing to check out, if this live video streaming stuff interests you, is that some guys recently proposed a live video streaming protocol VERY similar to Live's:
http://tools.ietf.org/html/draft-ietf-ppsp-peer-protocol-02
Just compare the BitTorrent Live protocol and this proposal..
One interesting thing that Bram's implementation does is actually speed up and slow down playback depending on the latency and on whether Live figures you need to buffer more or can afford a smaller delay/buffer. Talk to Bram and you'll quickly figure out he is obsessed with low latency..
This ends up being funny in practice too: you will notice when watching a stream that the playback speeds up and slows down while you watch it...
This is detailed in the patent but I did not see anything similar in the ppsp protocol..
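As a rough guess at how that kind of rate adaptation behaves (a hypothetical rule of thumb, not code from the Live client): pick a playback speed from how full the buffer is, so playback drifts a little faster when the buffer is comfortable and a little slower when it is running dry.

    def playback_rate(buffer_ms, target_ms=2000, max_drift=0.05):
        """Return a playback speed multiplier based on buffer fullness.

        Illustrative rule: play up to 5% faster when we have more than the
        target amount buffered (to shrink latency), and up to 5% slower
        when we are below target (to avoid stalling)."""
        # Error as a fraction of the target buffer, clamped to [-1, 1].
        error = max(-1.0, min(1.0, (buffer_ms - target_ms) / target_ms))
        return 1.0 + max_drift * error

    for b in (500, 2000, 4000):
        print(b, round(playback_rate(b), 3))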
> Live Video streaming is more or less an undeveloped field there is actually already a super successful P2P live video streaming implementation called PPLive that is BIG in China
Indeed. So the situation for the 'future of TV' is now:
- 1+ million users of proven technology (PPLive)
- patented technology released after 5 years of dev work (BitTorrent)
- Open Source reference implementation of an upcoming open IETF Internet standard (PPSP)
> One interesting thing that Brams implementation does is actually speed up and slow down the playback of traffic depending on the latency
> This is detailed in the patent but I did not see anything similar in the ppsp protocol..
Why link the network with the codec? From an architecture viewpoint I would consider this a 'layering violation'.
VLC has had support for dynamic playback speed for many years:
http://forum.videolan.org/viewtopic.php?t=50581
Why is live streaming not more popular? In my opinion, due to lack of quality. If we put the average upload capacity of Internet users at 800 kbps, that is the maximum donation you get. User donations limit the bitrate and quality of the live stream. Video quality at 800 kbps is unacceptable on HD laptop displays and 1080p televisions. As Prof. Keith Ross wrote many years ago: we need upload-view decoupling (http://cis.poly.edu/~ross/papers/VUDSystemMini.pdf). For HD-quality live streaming with P2P, users also need to donate bandwidth when they are not watching. Unfortunately, going beyond T4T (tit-for-tat) is an open scientific problem.
Disclaimer: I'm part of the PPSP streaming team. Note that -02 is outdated, latest: http://tools.ietf.org/html/draft-ietf-ppsp-peer-protocol-06
Shameless plug, open source competitor: https://github.com/Tribler/libswift (Android view/inject client available).
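To make the upload-capacity constraint above concrete, a back-of-the-envelope sketch (the 800 kbps figure is from the comment above; the overhead and idle-donor numbers are just illustrative guesses):

    def max_stream_bitrate_kbps(avg_upload_kbps=800, protocol_overhead=0.15,
                                idle_donors_per_viewer=0.0, idle_upload_kbps=800):
        """Crude upper bound on the bitrate a P2P live swarm can sustain.

        Each viewer must, on average, receive the full stream, so the upload
        donated per viewer has to cover the bitrate plus protocol overhead.
        idle_donors_per_viewer models upload-view decoupling: peers that keep
        seeding the stream without watching it."""
        donated = avg_upload_kbps + idle_donors_per_viewer * idle_upload_kbps
        return donated * (1 - protocol_overhead)

    print(max_stream_bitrate_kbps())                          # ~680 kbps: watchers only
    print(max_stream_bitrate_kbps(idle_donors_per_viewer=4))  # ~3400 kbps with 4 idle donors per viewer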
People use it to watch sports, for example.
But does it do it reliably and well? I get nothing but buffering problems; hell, I'm rarely even able to get up to speed to view the stream. Even when I do get on, the quality is too variable and skips abound.
It looks like BT's angle (different from Sopcast) is to break up the large swarms into smaller groupings that are loosely connected to others.
Actually this strikes me as a fascinating opportunity.
This underlines how dead DRM is. But it also gives a new opportunity to provide a service for the majority of people.
BT provides some large % of all home ADSL routers in the UK, and they slice off a % of each router for their "wifi-anywhere" service - it's roaming for BT subscribers: you park outside my house, you get to use my cordoned-off router bandwidth, and vice versa.
Now, the majority of problems will come from "poorly-behaved clients" - but if the majority of clients are simply running in the background on most routers, most clients will be well behaved.
Why? I didn't go too deep, but this seems to be a transport mechanism, not an audio or video codec. I'm not sure why this couldn't be used in conjunction with a DRM system (and it likely will need to be if they want real commercial adoption).
Nah, this will just be used to make DRM-ed services like Steam and Netflix and Hulu and Spotify faster.
And users will love it.
With BT-Fon the ADSL owner's traffic has priority over the BT-Fon traffic if push comes to shove.
So someone can use up to 512kbps of my ADSL connection, unless I need it, in which case they get throttled down to nothing.
(The traffic is also segregated in that BT-Fon stuff goes down the wire as a separate IP address from my own ADSL connection data. I'm not sure if multiple people on Fon on a single wireless router get unique IPs or not.)
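A minimal sketch of that kind of prioritisation (the numbers are guesses; this is not how BT's firmware actually does it): guest traffic gets at most its cap, and only out of whatever the owner isn't using at that moment.

    def split_bandwidth(link_kbps, owner_demand_kbps, guest_cap_kbps=512):
        """Owner traffic is served first; guests get the leftover, up to a cap."""
        owner = min(owner_demand_kbps, link_kbps)
        guest = min(guest_cap_kbps, link_kbps - owner)
        return owner, guest

    print(split_bandwidth(1000, owner_demand_kbps=200))   # (200, 512)
    print(split_bandwidth(1000, owner_demand_kbps=900))   # (900, 100)
    print(split_bandwidth(1000, owner_demand_kbps=1200))  # (1000, 0)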
It doesn't look particularly revolutionary, though I haven't examined the graph properties of the swarm that would result from this system.
That said, a side interest of mine is investigating applications of fountain codes to video streaming. There are a few papers out there and I'm slowly building up the knowledge (and courage) to implement something in that area..
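For anyone curious what that looks like, here is a bare-bones LT-style fountain encoder that XORs random subsets of source blocks (a toy with a naive uniform degree distribution rather than the robust soliton distribution a real system would want):

    import os
    import random

    def lt_encode(data, block_size=64, num_symbols=40, seed=1):
        """Toy LT fountain encoder: each encoded symbol is the XOR of a
        random subset of fixed-size source blocks. A receiver that knows
        each symbol's seed can rebuild the subsets and peel-decode once it
        has collected slightly more symbols than there are blocks."""
        # Pad and split the data into equal-size source blocks.
        pad = (-len(data)) % block_size
        data = data + b"\0" * pad
        blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]

        rng = random.Random(seed)
        symbols = []
        for _ in range(num_symbols):
            sym_seed = rng.getrandbits(32)
            sym_rng = random.Random(sym_seed)
            # Naive degree distribution: uniform over 1..len(blocks).
            degree = sym_rng.randint(1, len(blocks))
            chosen = sym_rng.sample(range(len(blocks)), degree)
            payload = bytearray(block_size)
            for idx in chosen:
                for j, b in enumerate(blocks[idx]):
                    payload[j] ^= b
            symbols.append((sym_seed, bytes(payload)))
        return symbols

    syms = lt_encode(os.urandom(500))
    print(len(syms), "encoded symbols of", len(syms[0][1]), "bytes each")

The appeal for streaming is that roughly any sufficiently large set of received symbols will do, so peers don't have to coordinate over exactly which block each of them forwards.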
Zattoo did the same 5 years ago. They replaced it with a server-to-client system for various reasons.
Bandwidth is too cheap now, and with HLS we have a technology that not only works on almost all devices out of the box (Flash, iOS, Android) but is also cacheable at various levels.
Nice technology, but I guess it will occupy a niche.
"it’s a tricky protocol to implement and poorly behaved peers can impact everyone"
Or, to put it another way, it's an unstable protocol where users could hold broadcasters to ransom.
Given the publicity, it's going to have a rocky time. Pretty much anyone has standing to raise an objection to a patent, so you can expect that competitors, anyone with a threatened business model, etc. will be raising objections all over the place.
The article says "screaming" a number of times when I guess it should say "streaming". Or is "screaming" used in some technical sense with BitTorrent? I Googled it but came up empty.
This was the best I could find: http://www.guralp.com/documents/SWA-RFC-SCRM.pdf
In the most basic mode of operation, a Scream client sends a UDP request packet to a Scream server at a regular interval. The Scream server transmits GCF blocks with some additional information to any clients that have sent a recent request. The usual port number (both TCP and UDP) for Scream is 1567.
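Purely to illustrate the polling pattern that document describes (a toy client, not a real Scream implementation; the request payload here is a made-up placeholder):

    import socket
    import time

    def poll_scream_server(host, port=1567, interval=10.0, polls=3):
        """Send a request datagram at a regular interval and print whatever
        the server pushes back - roughly the pattern described above."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(interval)
        for _ in range(polls):
            sock.sendto(b"REQUEST", (host, port))  # placeholder payload, not the real wire format
            deadline = time.time() + interval
            while time.time() < deadline:
                try:
                    block, addr = sock.recvfrom(4096)
                except socket.timeout:
                    break
                print("got %d bytes from %s" % (len(block), addr))
        sock.close()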