As you probably know, Alfresco can extend its data model with one or more custom models. This is incredibly powerful because you can store your own metadata (called properties) together with your documents. Folders, too, can be stored (and retrieved) with custom properties.
So, once you have successfully stored your documents and folders in Alfresco, how do you know how many of them are in one state rather than another? How do you develop and share reports and analytics about them? Last but not least, how do you analyze and monitor the status of your workflow instances?
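As a minimal sketch of the first question, suppose the documents (with their custom properties) have already been retrieved from the repository, for example through a CMIS query. Counting how many are in each state then reduces to a simple aggregation. The property name `my:state` and the document list below are hypothetical placeholders, not real Alfresco properties:

```python
from collections import Counter

# Hypothetical result of a repository query; "my:state" is an
# illustrative custom property name, not part of Alfresco's default model.
documents = [
    {"name": "invoice-001.pdf", "my:state": "approved"},
    {"name": "invoice-002.pdf", "my:state": "pending"},
    {"name": "contract-07.pdf", "my:state": "approved"},
]

# Count how many documents are in each state.
counts = Counter(doc["my:state"] for doc in documents)

print(counts["approved"])  # 2
print(counts["pending"])   # 1
```

In a real deployment the aggregation would of course be pushed down to the repository (or to the reporting database) rather than done in application code, but the logic is the same.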
Easy as 1… 2… 3!
They say that those who do not learn from history are doomed to repeat it, and in the case of casino software developers, this could not be more true. As with other congested markets, innovation is key to being successful in the online casino industry, and to be able to bring something totally new to the market, one must look back at what has already been done before.
One cannot blame developers and operators for wanting to enter the online casino industry, as global projections have estimated that the industry will grow by 79% by the end of 2014. The market has been growing steadily, from earning $7.4 billion in 2003 to $37.6 billion last year. Casino software has undergone great changes since it was first launched in 1996, and it has become increasingly important to stay up to date on these changes.
Casino software developers have done well to build on the security measures pioneered by Cryptologic when they launched InterCasino in 1996. Advanced for its time, Cryptologic had been the first to develop a payment method that wasn’t just safe and secure, but also fast and reliable. To this day, a safe and secure payment method is still the first thing that players check when visiting an online casino. A great idea is to link your payouts to PayPal accounts, but Idealware also has some great ideas for safely processing credit card payments.
Nobody likes playing the same games over and over again, and while every gamer will have a favorite slot game or a card game they’ll always fall back on, new games rake in the cash. This is the reason why some casino operators have resorted to developing and releasing new games every Wednesday – with great rewards. When developing new slots, vary the style and gameplay of each slot, making sure to create something for every kind of gamer imaginable out there.
Perhaps one of the most underrated aspects of designing or upgrading software is user feedback. Many have debated the saying that “The customer is always right,” but when it comes to a heavily user-driven network like the online casino market, listening to user feedback is essential. Many casinos have, in the past, experienced sit-outs and protests, as they struggled to implement measures that their players were vehemently against. Of course, this also means striking a balance between giving customers what they want, and still making a profit.
Developers of casino software may think that there is very little they can offer to the next generation of gamblers, and the idea that everything has been done before may be truer than we’d like to admit. But by listening to customer feedback and building on already-established strengths, casino software could still be revolutionary.
Brian James Smith has been an active player in online casinos since the dawn of the empire. While he has retired his hopes of ever winning big money in online gambling, he still continues to watch out for the latest updates from casino software developers from his Sin City apartment.
If you are interested in Hadoop technology, this video course is probably worth evaluating. As you probably know, Apache Hadoop is an open-source software framework for the storage and large-scale processing of data sets on clusters of commodity hardware. All the modules in Hadoop are designed with the assumption that hardware failures are common and should therefore be handled automatically in software by the framework.
Talking about the video course, the content can be divided into three main sections:
1. how to create and set up a three-machine cluster using Amazon EC2,
2. how to install a Hadoop cluster using Apache Ambari,
3. how to start using the Hadoop cluster, in particular through the Hadoop User Experience (HUE) web interface.
The description of all the topics is clear and well done (Sean Mikha, the author, did a good job). Each topic is first detailed with an explanation of the logical structure and approach, and only afterwards with a demonstration of how to do it in practice.
The creation of the virtual machines on Amazon EC2 is useful for other purposes as well. The practical, step-by-step walkthrough is not limited to creating the servers but also covers security and how to connect to them, for example using the PuTTY SSH client.
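As a minimal sketch of that connection step (the key path and host name below are hypothetical placeholders, not values from the course), the same connection PuTTY makes can also be done from a terminal with OpenSSH:

```shell
# Hypothetical placeholders -- substitute your own key pair file and the
# public DNS name that Amazon EC2 assigns to your instance.
KEY_FILE="$HOME/.ssh/hadoop-cluster.pem"
MASTER_HOST="ec2-203-0-113-10.compute-1.amazonaws.com"

# The private key must not be world-readable, or ssh refuses to use it:
#   chmod 400 "$KEY_FILE"

# Build the connection command (PuTTY users load the same key as a .ppk
# file converted with PuTTYgen, then connect to the same host).
SSH_CMD="ssh -i $KEY_FILE ec2-user@$MASTER_HOST"
echo "$SSH_CMD"
```

The default login user depends on the AMI you chose (for example `ec2-user` on Amazon Linux, `ubuntu` on Ubuntu images), so adjust it to match your instances.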
In my opinion, the most valuable part of this video course lies in the hidden details of the Hadoop cluster installation process. As you will see if you decide to follow it, the tasks are quite easy to perform (probably to Sean’s credit), but the configuration details and settings are very important if you want to make everything work in practice. By following the hints, I’m sure every neophyte will save days of work and many nights of googling. 😉
Thanks to the contributions of some of you, a new minor release of the Pentaho Data Integration plugin has been published.
The change log page describes more details, but the most important feature is the improved performance for massive extractions (especially during iterations). Some relevant bug fixes have also been released, hopefully solving the most important user problems.
I would also like to announce the migration of the project’s repository from the now-defunct Google Code to SourceForge.
Last but not least, the new version of the plugin will soon be included in AAAR so it can share the same benefits.
In the next few hours the new version will be available in the Pentaho Marketplace… stay tuned!
I want to explicitly thank the “reporting guys” for the collaboration, the attendees for the enthusiastic feedback and the Alfresco team and management because they were in the session giving us very important feedback and suggestions. In particular: John Newton, John Iball, Brian Remmington (always precious with his suggestions) and, of course, Jeff Potts.
Did you know that reporting and analytics are on the Alfresco roadmap presented during the Summit? No? Now you know! 😉
Below is a first photo of the event, and more news will follow soon.