Friday, 21 April 2017

Cloud Mining - How Much Passive Income Can You Make?

About Cryptocurrency Mining

Cloud mining is profitable; I have tried it. The question is: what is the return on investment, and when will you get your money back? Considering mining fees, contract duration and many other factors, it is hard to guess which site offers the best cloud mining contracts. And what about reinvestment opportunities?

Well, rather than guessing, I've spent a bit of my money to find out the real return of such contracts. I have used Hashflare and Genesis Mining, which I trust most. Their helpdesks were responsive when I had questions. Recently, I have started working with MyCoinCloud too.

This post is about sharing my observations.

Warning: Cryptocurrencies have been very volatile since late 2016. The figures I am about to share should be interpreted with caution. I have seen a lot of fluctuations. These numbers may not be valid anymore in a couple of days, weeks or months. I will update them from time to time (last update: July 9th, 2017).

Cloud Mining Comparison

Contract prices are not included below, since they often change according to market conditions. The profitability indicates the observed amount of coins produced per day for a given processing power, after fees are deducted (i.e., what ends up in your pocket).

Break-even is the estimated amount of time required to get your investment back. In other words, if you put 1$ in mining contracts, how much time does it take to get your 1$ back? This figure is probably the most volatile one, as plenty of factors influence it. I'll describe them later in this post.

Break-even is computed according to current contract price, excluding any promotions, coupons or discount for bulk buying. Max is the maximum observed break-even since I have started investing in online mining contracts.
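As a rough sketch of that calculation (the contract price and coin price below are hypothetical example values, not quotes from any of the companies reviewed here):

```javascript
// Estimate the break-even time of a cloud mining contract.
// Inputs are hypothetical example values, not actual provider prices.
function breakEvenMonths(contractPriceUsd, coinsPerDay, coinPriceUsd) {
  const dailyRevenueUsd = coinsPerDay * coinPriceUsd; // production after fees
  return contractPriceUsd / (dailyRevenueUsd * 30);   // ~30 days per month
}

// Example: a 150$ contract producing 0.00029 BTC per day, with BTC at 2500$:
console.log(breakEvenMonths(150, 0.00029, 2500).toFixed(1) + " months"); // 6.9 months
```

Any change in the coin price, the difficulty or the fees shifts this result, which is why the break-even figures are so volatile.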

When I initially wrote this post (which I update here and there), I provided data for fixed-term mining contracts. I have stopped monitoring these and decided to focus on lifetime contracts only. Therefore, the data provided below is only for lifetime contracts.  

Lifetime Mining Contract Profitability

Currency Company Profitability Power Break-Even (Max)
Bitcoin Hashflare 0.00019200 BTC  1 TH/s 8.2 months 19 months
Litecoin Hashflare 0.00003284 BTC 1 MH/s 5.4 months 17 months
Bitcoin Genesis Mining 0.00029004 BTC 1 TH/s 6.8 months 16 months
Ethereum MyCoinCloud 0.00045266 ETH 1 MH/s 5.9 months 7 months
ZCash MyCoinCloud 0.00334300 ZEC 100 H/s 6.6 months 7 months

Genesis Mining pays on a daily basis. MyCoinCloud pays on a weekly basis. For Hashflare, payment transfers are manual.

Which Factors Influence Mining Profits?

  • Mining Difficulty - This is a parameter influencing the productivity of mining servers. The more people mine a cryptocurrency, the higher its difficulty. The higher the difficulty, the more effort is required to produce a coin, and vice-versa. This parameter helps regulate the production of coins. It tends to follow the price fluctuations of cryptocurrencies, with some delay.
  • Contract Price - The general trend is an inverse correlation with a coin's mining difficulty and a positive correlation with a coin's value against a traditional currency (say USD). Since more and more people are mining coins, the mining difficulty rises. In order to keep their offer valuable, companies lower their contract prices. There is a notable exception: Litecoin contracts at Hashflare have gone from 9.9$ to 6.5$, then up to 13.5$ due to the Litecoin breakthrough in the first half of 2017.
  • Mining Fees - Fixed term contracts tend to have no mining fees, as these are already priced into the contract. However, lifetime contracts have a daily fee per unit of computing power to cover electricity costs (amongst others). Older mining hardware tends to consume more electricity than recent hardware. Typically, lifetime contracts produce coins as long as they are profitable. Then, the hardware is decommissioned.
  • Pool Fees - Some companies offer the possibility to select mining pools. Having performed some tests, I did not notice significant differences between them, except for the smaller ones whose profitability is more unpredictable and sometimes lower. I recommend avoiding pools not clearly publishing their fees.
  • Cryptocurrency Value - Bitcoin has seen its value rise from less than 700$ to more than 1200$ in about 4 months (Nov. 2016 to Feb. 2017). Since mining contracts produce cryptocoins, this factor is the most influential regarding profitability measured against traditional currencies. It can also heavily influence the mining difficulty.
  • ASIC Electronic Cards - These are electronic components designed for the sole purpose of mining cryptocurrencies. Each cryptocurrency uses a given algorithm. Some of these can be implemented in ASIC cards in order to boost processing power and reduce electricity consumption (Bitcoin, Litecoin, Dash). However, this is not (or hardly) possible for other currencies (Monero, ZCash, Ethereum). For the former, this means mining hardware becomes obsolete faster, while difficulty often rises faster to regulate production. For the latter, the corresponding difficulty does not fluctuate much.
  • Halving - Some cryptocurrencies see their mining reward halve from time to time (Bitcoin likely in June 2020, Litecoin in August 2019, Zcash in October 2020). These dates can only be estimated. Halvings put a sudden stress on the profitability of older hardware. As for other currencies, Dash sees a mining reward decrease of 7% per year, while Monero sees a small decrease after each block.

Warnings & Recommendations

  • Online mining is not the only way to invest in cryptocurrencies - If one believes the value of a currency will rise, one may as well buy some and wait, rather than invest in mining contracts. Trading coins has been more profitable than mining contracts between November 2016 and June 2017, thanks to a spectacular rise. However, this rise has reached a plateau and future rises are unlikely to be as sharp.   
  • Coupons and promotions mitigate risks - A 10% or 15% coupon has a dramatic impact on break-even. Use them to mitigate the risk of a constant rise in difficulty and/or decrease of currency value (in USD for example). However, be careful. If you Google for some coupons for, say, Genesis Mining, some advertising mentions between 3% and 10%, while in reality, it is only 3% and they know it.
  • Short-term vs Long-term - The global trend is up for the most important cryptocurrencies. Trading their value makes sense, especially because of the high volatility. Mining contracts are not the best option for short-term objectives, but IMHO, they excel at mid to long term objectives. They provide profitability with peace of mind. You don't need to spend all your time in front of your laptop, chasing trading opportunities.

Reinvesting In Mining Contracts

Hashflare offers the possibility to automatically (or manually) reinvest produced coins into extra mining contracts. I did the maths for fixed term contracts, but I don't see any value here, especially since it extends the break-even period. I am not saying there is no possibility for profits, but the extra risk is not worth it IMHO. Greed has already wiped out so many investors, I don't want to be the next one on the list...

For lifetime contracts, it's a different game. After an investment period (say 1 year), you still hold some processing power and that has a value. I did some maths and computed the cash flow value of production with a 60% yearly discount (i.e., if it is worth 100$ now, it will be worth 100 * 40% = 40$ the next year, and 40 * 40% = 16$ the following year, etc...). 60% might seem high for some, but remember about halving and the constant rising of difficulty for some cryptocurrencies. I would rather be conservative and safe, than sorry.
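That valuation can be sketched as follows (the 100$ of yearly production is just an illustrative figure):

```javascript
// Discounted value of a lifetime contract's future production:
// with a 60% yearly discount, each year retains only 40% of its face value.
function discountedValue(yearlyProductionUsd, years, retainRate = 0.4) {
  let total = 0;
  for (let year = 0; year < years; year++) {
    total += yearlyProductionUsd * Math.pow(retainRate, year);
  }
  return total;
}

// 100$ of yearly production valued over 5 years:
// 100 + 40 + 16 + 6.4 + 2.56 = 164.96
console.log(discountedValue(100, 5).toFixed(2));
```

Even under this conservative discount, the remaining processing power clearly keeps a non-negligible value.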

Well, the outcome is that even with a high discount, reinvestment in lifetime contracts offers pretty good value. I see two strategies for beginners here: the very safe approach by which one does not reinvest anything before reaching break-even, and the cautious approach which is to not reinvest more than what you have already recovered. Say you have invested 100$ and regained 35$, you would not reinvest more than 35 / 100 = 35% of future coins produced by your processing power.

Now that I have recovered my investments, I am using a full re-investment strategy, since I am interested in maximizing long-term benefits.  

How To Get Started With Cloud Mining?

One issue to tackle is wallets. It is technically complicated to hold them on your laptop. Considering I mine several currencies, I have found Cryptonator to be a good solution (but with some caveats, read the warning **). Although Cryptonator offers cryptocurrency conversions, I found Changelly to offer better conversion rates. For my Bitcoin wallet and for conversions to EUR and SEPA wire transfers, I use Bitwala.

Then, buy your first online mining contract at Hashflare, Genesis Mining or MyCoinCloud (*). You can shave 3% off the purchase price at Genesis Mining by using this permanent coupon: 6M1WUC. Hashflare offers temporary coupons from time to time. These are published on their website and on their Facebook page.

If you enjoyed this post, please share it or like it!!! Thanks!!!

(*) Disclaimer: I participate in the affiliation programs of Hashflare, Genesis Mining, Cryptonator, Bitwala and Changelly. If you register with the links in this post, both you and I will get some benefits (sometimes immediate, sometimes deferred, sometimes after doing some business with them).

(**) Warning: Some of my Bitcoin transactions with Cryptonator have been waiting for confirmation for several days now, because the transfer fee was too low (***). Unfortunately, Cryptonator does not let one set one's own transfer fee. This resulted in failed EUR SEPA transactions too. I had to insist to finally receive the money on my bank account.

(***) Bitcoin has grown so popular that the network had issues processing all transactions in a timely manner between May and June 2017. For several months, the involved parties did not agree on a solution to this looming issue. Recently (May 2017), they came to an agreement.

Sunday, 14 August 2016

Docker Concepts Plugged Together (for newbies)

Although Docker looks like a promising tool facilitating project implementation and deployment, it took me some time to wrap my head around its concepts. Therefore, I thought I might write another blog post to summarize and share my findings.

Docker Container & Images

Docker is an application running containers on your laptop, but also on staging or production servers. Containers are isolated application execution contexts which do not interfere with each other by default. If something crashes inside a container, the consequences are limited to that container. It is possible to open ports in a container, allowing it to interact with the external world, including with other containers having opened ports.

You can think of a Docker image as a kind of application ready to be executed in a container. In fact, an image can be more than just an application. It can be a whole Linux environment running the Apache server and a website to test, for example. By opening port 80, you can browse the content as if Apache and the website were installed on your laptop. But they are not. They are encapsulated in the container.

Docker runs in many environments: Windows, Linux, Mac. One starts, stops and restarts a container with Docker using available images. Each container has its private file system. One can connect and 'enter' the container via a shell prompt (assuming the container is running Linux, for example). You can add files to and remove files from the container. You can even install more software. However, when you delete the container, these modifications are lost.

If you want to keep these modifications, you can create a snapshot of the container, which is saved as a new image. Later, if you want to run the container with your modifications, you just need to start a container with this new image.

In theory, it is possible to run multiple processes in a container, but it is not considered a good practice.

Docker Build Files & Docker Layers

But how are Docker images created in the first place? In order to create an image, you need to install Docker on your laptop. Then, in a separate directory, you'll create a file named Dockerfile. This file contains the instructions to create the image.

Most often, you don't create an image from scratch, you rely on an existing image, for example Ubuntu. This is the 1st layer. Then, as the docker build command processes each line of the Dockerfile, each corresponding modification creates a new layer. It's like painting a wall. If you start with a blue background, and then paint some parts in red, the blue disappears under the red.

Once docker build has finished its job, the image is ready. In other words, a Docker image is a pile of layers. Each time you launch a container, Docker simply copies the built image into the container for execution. It does not recreate it from scratch.
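As an illustration, here is what a minimal Dockerfile for the Apache example above could look like; each instruction produces one layer (the base image tag, package name and paths are made up for the example):

```dockerfile
# Layer 1: start from an existing base image
FROM ubuntu:16.04

# Layer 2: install the Apache server
RUN apt-get update && apt-get install -y apache2

# Layer 3: copy the website to test into Apache's document root
COPY ./my-website/ /var/www/html/

# Declare the port to open and the process to run in the container
EXPOSE 80
CMD ["apache2ctl", "-D", "FOREGROUND"]
```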

Docker Volumes & Docker Registry

A Docker registry is simply a location where images can be pushed and stored for later use. There is a concept of image versions (tags), including a latest version. There is a public Docker registry (Docker Hub), but one can also install private registries.

A volume is a host directory located outside of a Docker container's file system. It is a means to make data created by a container in one of its directories available in the external volume directory on your laptop. A relationship is created between this inner container directory and the external directory on the local host. A volume 'belonging' to a container can be accessed by another container using proper configuration. For example, logs can be created by one container and processed by another. It is a typical use of volumes.

Contrary to a container's own file system, the data in its volume directory is not deleted when the container is erased. It can be accessed again later by the same or by other containers.

There is also a possibility to mount a local host directory onto a container's directory. This will make the content of the local host directory available in the container. In case of collision, the mounted data prevails over the container's data. It's like a poster on the blue wall. However, when the local host directory is unmounted, the initial container data is available again. If you remove the poster, that part of the wall is blue again.

But, Why Should I Use Docker?

Docker brings several big benefits. One of them is that you don't need to install and re-install environments to develop and test new applications, which saves a lot of time. You can also re-use images by building your own images on top of giants. This also saves a lot of time.

However, the biggest benefit, IMHO, is that you are guaranteed to have the same execution environment on your laptop as on your staging and production servers. Hence, if one developer works under Windows 10 and another on Mac, it does not matter. This mitigates the risk of facing tricky environment bugs at runtime.

Hope this helped.

Saturday, 26 September 2015

Explain React Concepts & Principles, Because I Am Not A UI Specialist

I have been reading React's documentation, but found that it takes too many shortcuts in describing the concepts and how they relate to each other to understand the whole picture. It is also missing a description of the principles it relies on. Not everyone is already a top-notch Javascript UI designer. This post is an attempt to fill the gaps. I am assuming you know what HTML, CSS and Javascript are.

What Issues Does React Try To Solve?

Designing sophisticated user interfaces using HTML, CSS and Javascript is a daunting task if you write all the Javascript code by yourself to display, hide or update parts of the screens dynamically. A lot of boilerplate code is required, which is a hassle to maintain. Another issue is screen responsiveness. Updating the DOM is a slow process which can impact user experience negatively.

React aims at easing the burden of implementing views in web applications. It increases productivity and improves the user experience.

React Concepts & Principles

React uses a divide and conquer approach based on components. In fact, they could be called screen components. They are similar to classes in Object Oriented Programming: each component is a unit of code and data specialized in the rendering of a part of the screen. Developing each component separately is an easy task, and the code can be easily maintained. All React classes and elements are implemented using Javascript.

Classes & Components

With React, you will create React classes and then instantiate React elements using these classes. React components can use other React components in a tree structure (just like the DOM structure is a tree structure too). Once an element is created, it is mounted (i.e. attached) to a node of the DOM, for example, to a div element having a specific id. The React component tree structure does not have to match the DOM structure.

No Templates

If you have developed HTML screens using CSS, it is likely you have used templates to render the whole page or parts of it. Here is something fundamentally different in React: it does not use templates. Instead, each component contains some data (i.e., state) and a method called render(). This method is called to draw or redraw the parts of the screen it is responsible for. You don't need to compute which data lines were already displayed in a table (for example), which should be updated, which should be deleted, etc... React does it for you in an efficient way and updates the DOM accordingly.

State & Previous State

Each component has a state, that is, a set of keys and values, also called properties. It is possible to access the current state with this.state. When a new state is set, the render() method is called automatically to compute the parts of the screen which have to be updated. This is extremely useful when JSON data is fetched with an Ajax call. You just need to set it in the corresponding React components and let React perform the screen updates.
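To make the principle concrete, here is a toy sketch of the state-to-render flow. This is not React's actual implementation, just an illustration of the idea that setting a new state triggers render():

```javascript
// Toy illustration of React's state/render principle (not the real React API).
class CounterComponent {
  constructor() {
    this.state = { count: 0 };
    this.output = this.render(); // initial rendering
  }
  setState(newState) {
    this.state = Object.assign({}, this.state, newState); // merge new keys into the state
    this.output = this.render(); // re-render triggered automatically
  }
  render() {
    return "<span>Clicked " + this.state.count + " times</span>";
  }
}

const counter = new CounterComponent();
counter.setState({ count: 3 }); // as if fresh Ajax data had been received
console.log(counter.output); // <span>Clicked 3 times</span>
```

Real React components work the same way conceptually, but React also diffs the rendered output and only touches the DOM nodes that actually changed.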

JSX & Transpilation

Creating a tree of React UI components using Javascript means writing lengthy-ish code which may not always be very readable. React introduces JSX, which is something between XML/HTML and Javascript. It provides a means to create UI component trees with concise code. Using JSX is not mandatory.

On the downside, JSX needs to be translated into React-based Javascript code. This process is called transpilation (as opposed to compilation) and can be achieved with Babel. It is possible to preprocess (i.e., pre-transpile) JSX code on the server side and only deliver pure HTML/CSS/Javascript pages to the browser. However, the transpilation can also happen on the client side: the server sends HTML/CSS/Javascript/JSX pages to the browser, and the browser transpiles the JSX before the page is displayed to the user.

That's it! You can now dive into React's documentation. I suggest starting with Thinking In React. It provides the first steps to design and implement React screens in your applications. I hope this post has eased the React learning curve!

Monday, 14 October 2013

Creating An OpenShift Web/Spring Application From The Command Line

OpenShift offers online functionalities to create applications, but this can also be achieved from the command line with the RHC Client Tool. For Windows, you will first need to install RubyGems and Git. The procedure is straightforward.

Git SSH Communication

OpenShift requires SSH communication between local Git repositories and the corresponding server repositories. Generating SSH keys for TortoiseGit on Windows can be tricky, but this post tells you how to achieve it.


From time to time, run the following commands for RubyGems and RHC updates:

> gem update --system
> gem update rhc

Creating the Spring application

Under Windows, open a cmd window and go to the directory where you want to create the application. Assuming you want to call it mySpringApp, run the following command:

> rhc app create mySpringApp jbosseap-6

The application will be automatically created and the corresponding Git repository will be cloned locally in your directory.

'Unable to clone your repository.'

If you encounter the above error, you will need to clone the Git repository manually. Assuming that TortoiseGit for Windows has been installed properly and that you have generated your SSH keys for Git properly, right-click on the directory where you want to clone the Git repository:

OpenShift git cloning manually

Enter the SSH URL in the first field (you can find it under 'My Applications' in OpenShift). Make sure you check Load Putty Key and that the field points at your .ppk file.

This solution has been made available on StackOverflow too.

Making it a Spring Application

To transform the above application into a Spring application, follow instructions available here.

If you cannot execute Git from the command line, it is most probably not in your PATH. You will need to add it and open a new command line window.

That's it, you are ready to go. Open the application in your favorite IDE. Don't forget to (Git) push the application to make it accessible from its OpenShift URL.

Tuesday, 1 October 2013

September 21-22-23, 2013 - Search Queries Not Updated in Google Webmaster Tools

This week-end, many people have started reporting the same issue in Google's Webmaster Forum: no more daily search queries information updates. For most, the data reporting stopped on September 23, 2013, but I have observed this since September 22, 2013.

Yesterday, a top contributor announced that this issue had been "escalated to the appropriate Google engineers". He mentions this issue started on September 21st. Therefore, it took 9 days before someone could confirm that Google is aware of it. Google Webmaster Tools (GWT) is known to lag 2 or 3 days behind when it comes to search query data, which explains why most webmasters only started to ask questions at the end of last week. This issue made the headlines of Search Engine Roundtable too.

In the confirmation post, a link to a 2010 video has been posted, in which Matt Cutts discusses which types of Webmaster Tools errors should be reported to Google. He mentions that Google engineers are a bit touchy when they are asked whether they monitor their systems. So did Google know about this issue since September 21st and deliberately decide not to answer posts in the Webmaster Tools forum for 9 days, or did they just miss it, because it was not monitored?

Many people have been hit by the recent Panda updates. August 21st, September 4th and more recent dates have triggered a lot of comments in forums. Many websites lost all their traffic without any explanation. No message in GWT, no manual penalty, nothing. Some of these sites were using plain white hat SEO. Webmasters working hard to produce quality content need GWT search query data feedback, especially when they believe some of their sites have been hit by recent updates. It helps them find out whether they have implemented the proper corrections or not.

On September 11th, a new Matt Cutts video was posted about finding out whether one has been hit or not by Panda, and whether one has recovered from it or not. Unfortunately, it does not contain clear-cut information answering the question. This video only confirms that Panda is now integrated into indexing and that one should focus on creating quality content. Google's interpretation of quality content is still vague, yet they have implemented algorithms to sort web pages.

If there is a bug impacting customers using their service, why isn't Google officially open and communicative about it? This has been an ongoing complaint from webmasters. I can understand that Google does not want to give too much information about their systems. They don't want hackers to exploit these against them. However, it clearly seems that the focus is more on not communicating with hackers than on communicating openly with regular webmasters. Is Google in defensive mode?

Google is capable of algorithmically detecting when a website (or some part of a website) has quality issues. It does not hesitate to penalize such websites. Then, why doesn't Google communicate automatically about these issues to regular webmasters in GWT? It is algorithmically possible, and scalable too. Google is not the only party interested in creating quality websites. It is in the interest of regular webmasters too. Of course, hackers would try to exploit this information, but overall, if regular webmasters had it too, they would create better content than hackers. Users would still sort between good and bad websites, not only Panda.

Sometimes, it really seems like Google does not truly want to collaborate with regular webmasters. I notice selective listening followed by monologues. Ask me questions and I'll answer them. I won't acknowledge any flaws, but I'll secretly work on these so you can't poke me again. This is not a collaborative dialogue, it is a defensive attitude. I believe that acting with excessive caution directly hampers the achievement of one's own objectives.

My strong opinion is that if Google solved this communication issue, it would bring much more return than any other stream of tweaks to their Panda algorithm. Give people the information they need to do a good job, empower them, trust them. Right now, the level of frustration is pretty high in the webmaster community. Frustration leads to lack of motivation. Lack of motivation decreases productivity. No productivity means not a chance to see new quality content or improvements.

There is a needless vicious circle and Google can do something about it, for its own good too.

Monday, 9 September 2013

Best Responsive Design Breakpoints

While trying to find an answer to my own question: "What are the best responsive design breakpoints?", I have performed a small statistical study over SmartPhone screen widths (portrait and landscape) using information provided by i-skool.

SmartPhone Screen Width Study (Portrait & Landscape)

This study counts how often each specific SmartPhone width appears in the source data. There are five peaks:
  • 320 pixels
  • 480 pixels
  • 768-800 pixels
  • 1024 pixels
  • 1280 pixels
These look like good responsive design breakpoint candidates.
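For instance, those candidate breakpoints could translate into CSS media queries along these lines (the selectors and style rules are made-up examples):

```css
/* Base styles: 320px-wide phones in portrait mode */
.sidebar { display: none; }

/* Phones in landscape mode */
@media (min-width: 480px) {
  .content { font-size: 110%; }
}

/* Tablets in portrait mode */
@media (min-width: 768px) {
  .sidebar { display: block; }
}

/* Tablets in landscape mode and small desktops */
@media (min-width: 1024px) {
  .content { max-width: 960px; }
}

/* Large desktops */
@media (min-width: 1280px) {
  .content { max-width: 1200px; }
}
```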

Best Google Ad Formats

Google offers several ad formats. Assuming the following breakpoints, here are examples of adequate ad formats with regards to width:
  • 320 to 479 pixels - Mobile Leaderboard (320x50), Half Banner (234x60), Medium Rectangle (300 x 250)
  • 480 to 767 pixels - Banner (468 x 60), Link Unit (468 x 15, displays 4 links)
  • 768 pixels and above - Leaderboard (728 x 90), Link Unit (728 x 15, displays 4 links)

Saturday, 7 September 2013

Sep 4th, 2013 - Sudden Drop In Traffic - A Thin Or Lack Of Original Content Ratio Issue?

Many people have reported a sudden drop in traffic to their websites since September 4th, 2013.

Google Webmaster forum is full of related posts. A Webmaster World thread has been started. Search Engine Roundtable mentions 'early signs of a possible major Google update'. A group spreadsheet has been created. No one seems to make sense of what is happening. There is a lot of confusion and speculation, without any definitive conclusion.

I have seen a major drop in traffic on a new website I am working on since then. However, the traffic for this blog has remained the same. No impact.

I am going to post relevant information and facts here as I find them. If you have any relevant or conclusive information to contribute, please do so in the comments and I will include them here. Let's try to understand what has happened.


Facts & Observations

  • Many owners claim no black hat techniques, no keyword stuffing, only original content, legitimate incoming links.
  • Many owners say they have not performed any (or significant) modifications to their website.
  • All keywords and niches are impacted.
  • Both old and new websites are impacted.
  • Both high and low traffic websites are impacted.
  • Some blogs are also impacted.
  • It is an international issue, not specific to a country, region or language.
  • Sites with few backlinks are also penalized, not only those with many backlinks.
  • Nothing changed from a Yahoo or Bing ranking perspective.
  • One person mentions a site with thin content still ranking well.
  • At least one website with valuable content has been penalized.
  • Several sites acting as download repositories have been impacted.
  • Some brands have been impacted too.
  • So far, Google has nothing to announce.
  • In May 2013, Matt Cutts announced that Panda 2.0 aims at getting better at fighting black hat techniques. He also announced that upcoming changes include better identification of websites with higher quality content, higher authority and higher trust. Google wants to know if you are 'an authority in a specific space'. Link analysis will be more sophisticated too.

Speculations


  • Websites with content duplicated on other pirate websites are penalized.
  • Websites with little or no original or badly written content are penalized (thin content vs plain content ratio).
  • Websites with aggregated content have been penalized.
  • Sites having a bad backlink profile have been penalized.
  • Sites having outbound links to murky sites or link farms have been penalized.
  • Ad density is an issue.
  • Google has decided to promote brand websites.
  • This is a follow-up update to the August 21st/22nd update, at a broader scale or deeper level.
  • An update has been posted and contains a bug (or a complex, unanticipated and undesirable side effect).


Using collected information and data gathered in the group spreadsheet:
  • Average drop in traffic is around 72%
  • No one reports use of black hat techniques
  • 12.8% report use of grey hat techniques
  • 23.1% report impact before 3rd/4th September
  • 7.7% have an EMD
  • 17.9% had a couple of 404 or server errors
  • 17.9% are not using AdSense
  • 30.8% admit thin content
  • 38.5% admit duplicate content
  • 25.6% admit aggregate content
  • 15.4% admit automatically generated content
  • 64.1% admit thin, duplicate, aggregate or automatically generated content
  • The number of backlinks ranges from 10 to 5.9 million
  • The number of indexed pages ranges from 45 to 12 million
The spreadsheet sample contains only 39 entries, which is small.
  1. The broad range for the number of backlinks seems to rule out a pure backlink (quality or amount) issue.
  2. The broad range of indexed pages points at a quality issue, rather than a quantity issue.
  3. More than 92% do not have an EMD, so this rules out a pure exact match domain issue.
  4. More than 82% did not have server or 404 issues, so this rules them out as the main cause.
  5. 17.9% are not using AdSense, meaning this cannot be a 'thin content above the fold' or 'too many ads above the fold' issue only.
  6. Some brand websites have been impacted. Therefore, it does not seem like Google tries to promote them over non-brand websites.
  7. Domain age, country and language are not discriminating factors.

Best Guess

By taking a look at the list of impacted websites and the information gathered so far, it seems like we are dealing with a Panda update where sites are delisted or very severely penalized in search rankings because of quality issues.

These are likely due to thin content, lack of original content, duplicate content, aggregated content or automatically generated content, or a combination of these. It seems like a threshold may have been reached for these sites, triggering the penalty or demotion.

Regarding duplicate content, there is no evidence confirming for sure that penalties have been triggered because a 3rd party website stole one's content. More than 60% do not report duplicate content issues.

To summarize, the September 4th culprit seems to be a high ratio of thin or unoriginal content, leading to an overall lack of high quality content, leading to a lack of trust and authority in one's specific space.

Unfortunately, Google has a long history of applying harsh mechanical decisions to websites without providing any specific explanation. This leaves people guessing what is wrong with their websites. Obviously, many of the impacted websites are not products of hackers or ill-willed people looking for an 'I win - Google loses' relationship.

Some notifications could be sent in advance to webmasters who have registered with Google Webmaster Tools. If webmasters do so, it can only mean they are interested in being informed (and not after the fact). This would also give them an opportunity to solve their website issues and work hand-in-hand with Google. So far, there is no opportunity or reward system to do so.

Possible Solutions

Someone from the Network Empire claims that Panda is purely algorithmic and that it is run from time to time. If this is true, then this might explain why no one received any notifications or manual penalty in Google Webmaster Tools, and why no one will.

Google might just be waiting for people to correct issues on their websites and will 'restore' these sites when they pass the Panda filter again. The upside is that this update may not be as fatal as it seems to be.

Assuming the best guess is correct, the following would help solve or mitigate the impact of this September 4th update:
  • Re-read Dr. Meyers' post about Fat Panda & Thin content.
  • Thin content pages should be marked as noindex (or removed from one's website) or merged into plain/useful/high quality content pages for users.
  • Low quality content (lots of useless text) pages should preferably be removed from the website, or at least be marked as noindex.
  • Internal duplicate content should be eliminated by removing duplicate pages or by using rel="canonical" (canonical pages).
  • Content aggregated from other websites is not original content. Hence, removing these pages can only help (or at least, these pages should be marked as noindex).
  • A lack of valuable content above the fold should be solved by removing excessive ads, if any.
  • Old pages not generating traffic should be marked as noindex (or removed).
  • Outbound links to bad pages should be removed (or at least marked as nofollow), especially if they do not contribute to good user experience. This helps restore credibility and authority.
  • Disavow incoming links from dodgy or bad quality websites (if any). One will lose all PageRank benefit from those links, but it will improve one's reputation.
  • Regarding Panda, it is known (and I'll post the link when I find it again) that one bad quality page can impact a whole website. So being diligent is a requirement.
Something to remember:
  • Matt Cutts has confirmed that noindex pages can accumulate and pass PageRank. Therefore, using noindex may be more interesting than removing a page, especially if it has accumulated PageRank and if it has links to other internal pages.