So you're finally testing your frontend JavaScript code? Great! The more
tests you write, the more confident you become in your code… but how
confident, exactly? That's where code coverage can help.
actually works…
Drinking game for web devs:
(1) Think of a noun
(2) Google "<noun>.js"
(3) If a library with that name exists - drink
— Shay Friedman (@ironshay)
Blanket.js is an easy to install, easy to configure, and easy to use JavaScript code coverage library.
Notes:
- Notice the data-cover attribute we added to the script tag loading the source of our library;
- The HTML test file must be served over HTTP for the adapter to be loaded.
Running the tests now gives us something like this:
As you can see, the report at the bottom highlights that we haven't actually
tested the case where an error is raised when a target name is missing.
We've been informed of that, nothing more, nothing less. We simply know
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/002/expected.html b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/002/expected.html
index 0525100d6..564f9a915 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/002/expected.html
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/002/expected.html
For more than a decade the Web has used XMLHttpRequest (XHR) to achieve
asynchronous requests in JavaScript. While very useful, XHR is not a very
nice API. It suffers from lack of separation of concerns. The input, output
and state are all managed by interacting with one object, and state is
The Fetch specification, which
defines the API, nails down the semantics of a user agent fetching a resource.
This, combined with ServiceWorkers, is an attempt to:
Improve the offline experience.
Expose the building blocks of the Web to the platform as part of the
extensible web movement.
As of this writing, the Fetch API is available in Firefox 39 (currently
Nightly) and Chrome 42 (currently dev). Github has a Fetch polyfill.
Feature detection
The most useful, high-level part of the Fetch API is the fetch() function.
In its simplest form it takes a URL and returns a promise that resolves
to the response. The response is captured as a Response object.
The fetch() function’s arguments are the same as those passed to the Request() constructor, so you may directly pass arbitrarily complex requests to fetch() as discussed below.
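Since fetch() takes the same arguments as the Request() constructor, a pre-built Request can be handed to it directly. A minimal sketch (not from the original article; the data: URL is just to keep the example self-contained, no server needed):

```javascript
// A pre-built Request passed straight to fetch().
// The data: URL is illustrative, chosen so the example needs no server.
var req = new Request("data:text/plain,hello");
fetch(req).then(function(res) {
  return res.text();
}).then(function(body) {
  console.log(body); // "hello"
});
```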
Headers
Fetch introduces 3 interfaces. These are Headers, Request and Response. They map directly to the underlying HTTP concepts, but have certain visibility filters in place for privacy and security reasons, such as supporting CORS rules and ensuring cookies aren’t readable by third parties. Some of these operations are only useful in ServiceWorkers, but they provide a much nicer API to Headers.
Since Headers can be sent in requests, or received in responses, and have
various limitations about what information can and should be mutable, Headers objects
have a guard property. This is not exposed to the Web, but
it affects which mutation operations are allowed on the Headers object.
Possible values are:
“none”: default.
“request”: guard for a Headers object obtained from a Request (Request.headers).
“request-no-cors”: guard for a Headers object obtained from a Request created with mode “no-cors”.
“response”: naturally, for Headers obtained from Response (Response.headers).
“immutable”: Mostly used for ServiceWorkers, renders a Headers object read-only.
The details of how each guard affects the behaviors of the Headers object are in the specification. For example,
you may not append or set a “request” guarded Headers’ “Content-Length”
header. Similarly, inserting “Set-Cookie” into a Response header is not
allowed so that ServiceWorkers may not set cookies via synthesized Responses.
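As a quick illustration of the Headers interface itself, here is a short sketch (not taken from the article) of constructing, appending and reading back values:

```javascript
// Basic Headers usage: construct, append, and read back values.
var headers = new Headers();
headers.append("Content-Type", "application/json");
headers.append("X-Custom", "a");
headers.append("X-Custom", "b"); // append() keeps both values

console.log(headers.get("Content-Type"));  // "application/json"
console.log(headers.get("X-Custom"));      // "a, b" (values are combined)
console.log(headers.has("Authorization")); // false
```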
All of the Headers methods throw TypeError if name is not a
valid HTTP Header name. The mutation operations will throw TypeError
if there is an immutable guard. Otherwise they fail silently. For example:
var res = Response.error();
try {
  res.headers.set("Origin", "http://mybank.com");
} catch (e) {
  console.log("Cannot pretend to be a bank!");
}
The simplest Request is of course, just a URL, as you may do to GET a
resource.
var req = new Request("/index.html");
console.log(req.method); // "GET"
console.log(req.url);    // "http://example.com/index.html"
You may also pass a Request to the Request() constructor to
create a copy.
(This is not the same as calling the clone() method, which is covered in the “Reading bodies” section.)

var copy = new Request(req);
console.log(copy.method); // "GET"
console.log(copy.url);    // "http://example.com/index.html"
Again, this form is probably only useful in ServiceWorkers.
The non-URL attributes of the Request can only be set by passing
initial
values as a second argument to the constructor. This argument is a dictionary.

var uploadReq = new Request("/uploadImage", {
  method: "POST",
  headers: { "Content-Type": "image/png" },
  body: "image data"
});
The Request’s mode is used to determine if cross-origin requests lead
to valid responses, and which properties on the response are readable.
Legal mode values are "same-origin", "no-cors" (default) and "cors".
The "same-origin" mode is simple, if a request is made to another
origin with this mode set, the result is simply an error. You could use
this to ensure that
a request is always being made to your origin.

var arbitraryUrl = document.getElementById("url-input").value;
fetch(arbitraryUrl, { mode: "same-origin" }).then(function(res) {
  console.log("Response succeeded?", res.ok);
}, function(e) {
  console.log("Please enter a same-origin URL!");
});
The "no-cors" mode captures what the web platform does by default
for scripts you import from CDNs, images hosted on other domains, and so
on. First, it prevents the method from being anything other than “HEAD”, “GET” or “POST”.
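A sketch of constructing a "no-cors" Request, as you might for a script or image hosted on another origin (the CDN URL is a placeholder; building the Request sends nothing over the network):

```javascript
// A "no-cors" Request, as for a script or image on another origin.
// The URL is a placeholder; no request is actually sent here.
var req = new Request("https://cdn.example.com/lib.js", { mode: "no-cors" });
console.log(req.mode);   // "no-cors"
console.log(req.method); // "GET" (only HEAD, GET and POST are allowed)
```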
"cors" mode is what you’ll usually use to make known cross-origin
requests to access various APIs offered by other vendors. These are expected
to adhere to
the CORS protocol.
Only a limited set of
headers is exposed in the Response, but the body is readable. For example,
you could get a list of Flickr’s most interesting photos
today like this:
var u = new URLSearchParams();
u.append('method', 'flickr.interestingness.getList');
u.append('api_key', '<insert api key here>');
u.append('format', 'json');
u.append('nojsoncallback', '1');

var apiCall = fetch('https://api.flickr.com/services/rest?' + u);
apiCall.then(function(response) {
  return response.json().then(function(json) {
    // photo is a list of photos.
    return json.photos.photo;
  });
}).then(function(photos) {
  photos.forEach(function(photo) {
    console.log(photo.title);
  });
});
You may not read out the “Date” header since Flickr does not allow it via Access-Control-Expose-Headers.

response.headers.get("Date"); // null
The credentials enumeration determines if cookies for the other
domain are
sent to cross-origin requests. This is similar to XHR’s withCredentials flag, but tri-valued as "omit" (default), "same-origin" and "include".
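For example, opting in explicitly (a sketch; the API URL is hypothetical, and constructing the Request sends nothing):

```javascript
// Explicitly opting in to sending cookies with a cross-origin request.
// The URL is hypothetical; building the Request performs no network I/O.
var req = new Request("https://api.example.com/profile", {
  credentials: "include" // cookies are sent even cross-origin
});
console.log(req.credentials); // "include"
```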
The Request object will also give the ability to offer caching hints to
the user-agent. This is currently undergoing some security review.
Firefox exposes the attribute, but it has no effect.
Requests have two read-only attributes that are relevant to ServiceWorkers
intercepting them. There is the string referrer, which is set by the UA to be the referrer of the Request. This may be an empty string. The other is context which is a rather large enumeration defining
what sort of resource is being fetched. This could be “image” if the request
is from an
<img>tag in the controlled document, “worker” if it is an attempt to load a
The url attribute reflects the URL of the corresponding request.
Response also has a type, which is “basic”, “cors”, “default”,
“error” or
“opaque”.
"basic": normal, same origin response, with all headers exposed except “Set-Cookie” and “Set-Cookie2”.
"error": network error. No useful information describing the error is available. The Response’s status is 0, headers are empty and immutable. This is the type for a Response obtained from Response.error(). The “error” type results in the fetch() Promise rejecting with TypeError.
There are certain attributes that are useful only in a ServiceWorker scope.
The
idiomatic way to return a Response to an intercepted request in ServiceWorkers
is:
As you can see, Response has a two argument constructor, where both arguments
are optional. The first argument is a body initializer, and the second
is a dictionary to set the status, statusText and headers.
The static method Response.error() simply returns an error
response. Similarly, Response.redirect(url, status) returns
a Response resulting in
a redirect to url.
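A short sketch (not from the article) of the two-argument constructor and the two static helpers just described:

```javascript
// Constructing a Response: body first, then a dictionary for
// status, statusText and headers.
var res = new Response("hello", {
  status: 200,
  statusText: "OK",
  headers: { "Content-Type": "text/plain" }
});
console.log(res.status);                      // 200
console.log(res.headers.get("Content-Type")); // "text/plain"

// The static helpers described above.
var err = Response.error();
console.log(err.type, err.status); // "error" 0

var redir = Response.redirect("https://example.com/", 301);
console.log(redir.status); // 301
```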
Dealing with bodies
over it because of the various data types body may contain, but we will
cover it in detail now.
A body is an instance of any of the following types.
FormData –
currently not supported by either Gecko or Blink. Firefox expects to ship
this in version 39 along with the rest of Fetch.
In addition, Request and Response both offer the following methods to
extract their body. These all return a Promise that is eventually resolved
with the actual content.
This is a significant improvement over XHR in terms of ease of use of
non-text data!
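For instance, reading a body as JSON (a sketch; the body string is illustrative):

```javascript
// Body-extraction methods return a Promise for the parsed content.
var res = new Response('{"ok": true, "count": 2}');
res.json().then(function(data) {
  console.log(data.ok);    // true
  console.log(data.count); // 2
});
```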
Request bodies can be set by passing body parameters:
var form = new FormData(document.getElementById('login-form'));
fetch("/login", {
  method: "POST",
  body: form
});
Responses take the first argument as the body.
var res = new Response(new File(["chunk", "chunk"], "archive.zip",
  { type: "application/zip" }));
Both Request and Response (and by extension the fetch() function),
will try to intelligently determine the content type.
Request will also automatically set a “Content-Type” header if none is
It is important to realise that Request and Response bodies can only be
read once! Both interfaces have a boolean attribute bodyUsed to
determine if it is safe to read or not.
var res = new Response("one time use");
console.log(res.bodyUsed); // false
res.text().then(function(v) {
  console.log(res.bodyUsed); // true
});
console.log(res.bodyUsed); // true

res.text().catch(function(e) {
  console.log("Tried to read already consumed Response");
});
This decision allows easing the transition to an eventual stream-based Fetch
API. The intention is to let applications consume data as it arrives, allowing
for JavaScript to deal with larger files like videos, and perform things
clone() MUST
be called before the body of the corresponding object has been used. That
is, clone() first, read later.
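A sketch of the clone-first rule:

```javascript
// clone() first, read later: the clone keeps its own copy of the body.
var res = new Response("payload");
var copy = res.clone(); // must happen before res is read

res.text().then(function(v) {
  console.log(v); // "payload"
  return copy.text();
}).then(function(v2) {
  console.log(v2); // "payload" (the clone is independently readable)
});
```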
Fetch and ServiceWorker specifications.
For a better web!
The author would like to thank Andrea Marchesini, Anne van Kesteren and Ben
Kelly for helping with the specification and implementation.
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/003-metadata-preferred/expected.html b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/003-metadata-preferred/expected.html
index 6b03dd384..b282bddee 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/003-metadata-preferred/expected.html
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/003-metadata-preferred/expected.html
@@ -1,4 +1,5 @@
+
Test document title
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/004-metadata-space-separated-properties/expected.html b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/004-metadata-space-separated-properties/expected.html
index 6b03dd384..b282bddee 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/004-metadata-space-separated-properties/expected.html
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/004-metadata-space-separated-properties/expected.html
@@ -1,4 +1,5 @@
+
Test document title
Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod
tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam,
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected-metadata.json b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected-metadata.json
index 9b2070388..eeccb9e70 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected-metadata.json
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected-metadata.json
@@ -1,8 +1,8 @@
{
- "Author": "By Daniel Kahn Gillmor, Senior Staff Technologist, ACLU Speech, Privacy, and Technology Project",
- "Direction": "ltr",
- "Excerpt": "I don't use Facebook. I'm not technophobic — I'm a geek. I've been using email since the early 1990s, I have accounts on hundreds of services around the net, and I do software development and internet protocol design both for work and for fun. I believe that a globe-spanning communications network like the internet can be a positive social force, and I publish much of my own work on the open web.",
+ "Author": "Daniel Kahn Gillmor",
+ "Direction": null,
+ "Excerpt": "Facebook collects data about people who have never even opted in. But there are ways these non-users can protect themselves.",
"Image": "https:\/\/www.aclu.org\/sites\/default\/files\/styles\/metatag_og_image_1200x630\/public\/field_share_image\/web18-facebook-socialshare-1200x628-v02.png?itok=p77cQjOm",
- "Title": "Facebook Is Tracking Me Even Though I’m Not on Facebook",
+ "Title": "Facebook Is Tracking Me Even Though I\u2019m Not on Facebook",
"SiteName": "American Civil Liberties Union"
-}
+}
\ No newline at end of file
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected.html b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected.html
index 15801438e..8efcda5b0 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected.html
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/aclu/expected.html
@@ -107,7 +107,7 @@
Opting out?
- Some advertisers claim that you can "opt out" of their targeted advertising, and even offer a centralized place meant to help you do so. However, my experience with these tools isn't a positive one. They don't appear to work all of the time. (In a recent experiment I conducted, two advertisers’ opt-out mechanisms failed to take effect.) And while advertisers claim to allow the user to opt out of "interest-based ads," it's not clear that the opt-outs govern data collection itself, rather than just the use of the collected data for displaying ads. Moreover, opting out on their terms requires the use of third-party cookies, thereby enabling another mechanism that other advertisers can then exploit.
+ Some advertisers claim that you can "opt out" of their targeted advertising, and even offer a centralized place meant to help you do so. However, my experience with these tools isn't a positive one. They don't appear to work all of the time. (In a recent experiment I conducted, two advertisers’ opt-out mechanisms failed to take effect.) And while advertisers claim to allow the user to opt out of "interest-based ads," it's not clear that the opt-outs govern data collection itself, rather than just the use of the collected data for displaying ads. Moreover, opting out on their terms requires the use of third-party cookies, thereby enabling another mechanism that other advertisers can then exploit.
It's also not clear how they function over time: How frequently do I need to take these steps? Do they expire? How often should I check back to make sure I’m still opted out? I'd much prefer an approach requiring me to opt in to surveillance and targeting.
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-images.json b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-images.json
index b55555167..d14a5589d 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-images.json
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-images.json
@@ -1,3 +1,4 @@
[
- "http:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2015\/04\/server-crash-640x426.jpg"
+ "https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2015\/04\/server-crash-640x215.jpg",
+ "https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2015\/04\/server-crash-640x426.jpg"
]
\ No newline at end of file
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-metadata.json b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-metadata.json
index 0594bf203..2cb17ec76 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-metadata.json
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected-metadata.json
@@ -1,8 +1,8 @@
{
- "Author": "by Dan Goodin - Apr 16, 2015 8:02 pm UTC",
+ "Author": "Dan Goodin - Apr 16, 2015 8:02 pm UTC",
"Direction": null,
"Excerpt": "Two-year-old bug exposes thousands of servers to crippling attack.",
- "Image": "http:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2015\/04\/server-crash-640x426.jpg",
+ "Image": "https:\/\/cdn.arstechnica.net\/wp-content\/uploads\/2015\/04\/server-crash-640x215.jpg",
"Title": "Just-released Minecraft exploit makes it easy to crash game servers",
"SiteName": "Ars Technica"
-}
+}
\ No newline at end of file
diff --git a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected.html b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected.html
index 0aecf6e9f..905e0d157 100644
--- a/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected.html
+++ b/plugins/af_readability/vendor/fivefilters/readability.php/test/test-pages/ars-1/expected.html
Biz & IT —
Two-year-old bug exposes thousands of servers to crippling attack.
A flaw in the wildly popular online game Minecraft makes it easy for just about anyone to crash the server hosting the game, according to a computer programmer who has released proof-of-concept code that exploits the vulnerability.
"I thought a lot before writing this post," Pakistan-based developer Ammar Askar wrote in a blog post published Thursday, 21 months, he said, after privately reporting the bug to Minecraft developer Mojang. "On the one hand I don't want to expose thousands of servers to a major vulnerability, yet on the other hand Mojang has failed to act on it."
The bug resides in the networking internals of the Minecraft protocol. It allows the contents of inventory slots to be exchanged, so that, among other things, items in players' hotbars are displayed automatically after logging in. Minecraft items can also store arbitrary metadata in a file format known as Named Binary Tag (NBT), which allows complex data structures to be kept in hierarchical nests. Askar has released proof-of-concept attack code he said exploits the vulnerability to crash any server hosting the game. Here's how it works.
The vulnerability stems from the fact that the client is allowed to send the server information about certain slots. This, coupled with the NBT format’s nesting, allows us to craft a packet that is incredibly complex for the server to deserialize but trivial for us to generate.
In my case, I chose to create lists within lists, down to five levels. This is a JSON representation of what it looks like.
rekt:{
}
The root of the object, rekt, contains 300 lists. Each list has a list with 10 sublists, and each of those sublists has 10 of their own, up until 5 levels of recursion. That’s a total of 10^5 * 300 = 30,000,000 lists.
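The nesting arithmetic above can be sketched in a few lines of JavaScript (a hypothetical illustration, not Askar's actual exploit code, which built NBT structures rather than plain arrays):

```javascript
// Hypothetical sketch of the nesting described above:
// lists of lists, 10 wide, down to 5 levels of recursion.
function nested(depth, fanout) {
  var list = [];
  if (depth === 0) return list;
  for (var i = 0; i < fanout; i++) {
    list.push(nested(depth - 1, fanout));
  }
  return list;
}

// Count every list node in one tree, including the root.
function countLists(node) {
  var n = 1;
  for (var i = 0; i < node.length; i++) n += countLists(node[i]);
  return n;
}

console.log(countLists(nested(5, 10))); // 111111 list nodes per tree
// The payload held 300 such trees, i.e. tens of millions of lists overall.
```

The point of the attack is exactly this asymmetry: the generator is a few lines, but deserializing the result forces the server to allocate millions of objects.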
And this isn’t even the theoretical maximum for this attack. Just the NBT data for this payload is 26.6 megabytes. But luckily Minecraft implements a way to compress large packets, lucky us! zlib shrinks down our evil data to a mere 39 kilobytes.
Note: in previous versions of Minecraft, there was no protocol-wide compression for big packets. Previously, NBT was sent compressed with gzip and prefixed with a signed short of its length, which reduced our maximum payload size to 2^15 - 1. Now that the length is a varint capable of storing integers up to 2^28, our potential for attack has increased significantly.
When the server decompresses our data, it’ll have 27 megs in a buffer somewhere in memory, but that isn’t the bit that’ll kill it. When it attempts to parse it into NBT, it’ll create Java representations of the objects, meaning suddenly the server is having to create several million Java objects, including ArrayLists. This runs the server out of memory and causes tremendous CPU load.
The fix for this vulnerability isn’t exactly that hard: the client should never really send a data structure as complex as NBT of arbitrary size, and if it must, some form of recursion and size limits should be implemented.
These were the fixes that I recommended to Mojang 2 years ago.
Ars is asking Mojang for comment and will update this post if company officials respond.