Tuesday, January 27, 2015

Power BI now Free, New Power BI Pro $9.99/user a month!

http://ift.tt/1tlqGwY

Today Microsoft is again making waves, announcing significant changes to the pricing model for its Power BI SaaS offerings.




from SQL Server Pro http://ift.tt/1kCjeYg

MariaDB 10.0.16 now available

http://ift.tt/1zUioi0

Download MariaDB 10.0.16
Release Notes
Changelog
What is MariaDB 10.0?
MariaDB APT and YUM Repository Configuration Generator

The MariaDB project is pleased to announce the immediate availability of MariaDB 10.0.16. This is a Stable (GA) release. See the Release Notes and Changelog for detailed information on this release, and the "What is MariaDB 10.0?" page in the MariaDB Knowledge Base for general information about the MariaDB 10.0 series. Thanks, and enjoy MariaDB!

from Planet MySQL http://ift.tt/1du18ol

Log Buffer #407, A Carnival of the Vanities for DBAs

http://ift.tt/1uY8A57

This Log Buffer Edition keeps the aim high and brings a few of the best blog posts from Oracle, SQL Server and MySQL.

Oracle:
- 3 Modes for Moving Data to the BI Applications DW from a Source Application Database.
- JSON for APEX Developers.
- Neelakanth Nadgir posted a useful utility that prints out various statistics about the ZFS Adaptive Replacement Cache (ARC).
- Obtaining Bonus Depreciation Methods for Oracle Fixed Assets.
- Exciting News – Java Cloud Service just got an update – WebLogic Server 12.1.3.

SQL Server:
- Tracking Database DDL Changes with SQLVer.
- While a diminished level of control is certainly a factor to consider when contemplating migration of on-premises systems to Microsoft Azure, especially when dealing with PaaS resources such as Azure SQL Database, you have a range of techniques at your disposal that allow you to control and monitor both the status of and access to your Azure-resident services.
- A Custom Execution Method – Level 19 of the Stairway to Integration Services.
- When a SQL Server database is operating smoothly and performing well, there is no need to be particularly aware of the transaction log, beyond ensuring that every database has an appropriate backup regime and restore plan in place.
- SQL Server Reporting Services Basics: Deploying Reports.

MySQL:
- Mo’ Data, Mo’ Problems.
- FromDual.en: FromDual Backup Manager for MySQL 1.2.1 has been released.
- Tracking MySQL query history in long running transactions.
- MySQL Cluster 7.4.3 RELEASE CANDIDATE now available.
- Streaming MariaDB backups in the cloud.

from Planet MySQL http://ift.tt/1du18ol

MMUG11: Talk about binlog servers at Madrid MySQL Users Group meeting on 29th January 2015

http://ift.tt/1D21RWG

Madrid MySQL Users Group will have its next meeting on Thursday, the 29th of January. I will be giving a presentation on the MySQL binlog server and how it can be used to help scale reads, as well as for other purposes. If you have (more than) a few slaves, this talk might be interesting for you. The meeting will be in Spanish. I hope to see you there. Details can be found on the group's Meetup page: http://ift.tt/1D21Qlt

from Planet MySQL http://ift.tt/1du18ol

Monday, January 26, 2015

Performance issues and fixes -- MySQL 5.6 Semi-Synchronous Replication

http://ift.tt/15DTLsy

A long time ago I wrote a blog post about how we started using Semi-Synchronous replication in production at Facebook. We run our own MySQL facebook-5.6 branch, which added many replication (including Semisync) patches. Some of them were also merged into WebScaleSQL. Recently I heard from people in the community that they were suffering from Semisync performance issues in official MySQL 5.6. I took some time to review our previous MySQL bug reports, and realized that some important bugs were either still "verified" or inefficiently fixed. The two most impactful bug reports were http://ift.tt/1LdfckO and http://ift.tt/15DTLc4. We fixed both in our branch, so I hadn't paid much attention since then, but people outside Facebook are certainly affected.

In this post, I'm going to describe some effective configurations for getting better Semisync throughput on the master and slaves, illustrated with simple benchmark numbers. I used three machines -- a client, a master and a semisync slave -- all running on pure flash and located within very close distance of each other. I created 100 databases, enabled Multi-Threaded Slave, and ran 100 mysqlslap processes (one per database) with 30 concurrent connections each (3,000 concurrent connections in total). All queries were auto-committed inserts, and I used the InnoDB storage engine on both the master and the slaves.

1. Set master_info_repository=TABLE

MySQL 5.6 and 5.7 have a performance bug that makes writing the FILE-based master info (and relay log info) files very expensive. This is especially serious for Semisync replication, since the bug slows down the IO thread. With Semisync, a slow IO thread takes longer to send the ACK back to the master, so it slows down master throughput as well. The default master_info_repository is FILE, so without changing this parameter to TABLE you are affected by this bug. Here are benchmark results comparing FILE and TABLE:

5.6.22 master_info_repository=FILE:  5870/s
5.6.22 master_info_repository=TABLE: 7365/s

These numbers are commits per second on both the master and slave instances. The slave didn't lag in most experiments, thanks to the multi-threaded slave feature. Please don't confuse the master_info_repository and relay_log_info_repository parameters. relay_log_info_repository has to be TABLE, otherwise the crash-safe slave feature doesn't work. master_info_repository works with both FILE and TABLE, but I suggest using TABLE for performance reasons.

2. Reduce durability on master

Older 5.6 had a bug where slaves couldn't continue replication after a crashed master's recovery, even with fully durable configurations. Here is a closed bug report. The bug report was closed, but the fix caused a performance regression: the master extended the LOCK_log mutex holding duration, releasing LOCK_log only after calling fsync(). This certainly fixed the bug, but hurt performance, because LOCK_log is a very hot mutex in 5.6 -- both the Binlog Dump thread and application threads need to hold it. Fortunately, reducing durability (setting sync_binlog=0 and innodb_flush_log_at_trx_commit=0|2) mitigates the regression a lot. When using Semisync replication, you will almost certainly promote a slave on master failure, so durability on the master does not matter much -- you can reduce durability on the master.

5.6.22 master_info_repository=TABLE, full durability: 7365/s
5.6.22 master_info_repository=TABLE, less durability: 9800/s

3. Loss-Less Semisync

MySQL 5.7 improved replication performance a lot. Probably the most effective improvement was that the Binlog Dump thread no longer holds the LOCK_log mutex. In addition, 5.7 introduced the "Loss-Less Semisync" feature. If you read my previous Semisync blog post carefully, you may have noticed that (1) we backported Loss-Less Semisync from 5.7, and (2) we got better throughput with Loss-Less Semisync than with Normal Semisync. This is because Loss-Less Semisync actually reduces mutex contention -- on LOCK_commit and LOCK_binlog_. My 5.7 benchmark results were as follows:

5.7 Normal Semisync:    12302/s
5.7 Loss-Less Semisync: 14500/s
(master_info_repository=TABLE, less durable)

Reducing LOCK_log contention in the Binlog Dump thread and introducing Loss-Less Semisync were the major contributors to MySQL 5.7's replication performance improvements. At Facebook, we ported both into our 5.6 branch. It would be interesting for the community if these were available in other distributions, since using 5.7 in production will not happen anytime soon.

In addition to the above three configurations, there are still some things to consider to make Semisync throughput acceptable or better.

4. Semisync mysqlbinlog

At Facebook, we taught mysqlbinlog to speak the Semisync protocol and used it as a Semisync replication reader. On replication slaves, the IO thread and SQL thread contend on internal mutexes. As mentioned above, a slow IO thread slows down Semisync master throughput. Semisync mysqlbinlog doesn't have that slowdown, because it has no SQL thread, so using Semisync mysqlbinlog instead of a Semisync slave can improve master throughput.

5.7 Loss-Less Semisync + Semisync slave: 14500/s (on both master and slave)
5.7 Loss-Less Semisync + Semisync mysqlbinlog + async slave: 48814/s on master, 15363/s on slave

This shows that Semisync mysqlbinlog improved master throughput. But it is actually not such good news -- the slave lags a lot. Fundamentally, we need to fix the mutex contention between the IO thread and SQL threads.

5. GTID

There are some open performance bugs in GTID. The 5.7 one is especially serious. If you really need high-throughput Semisync, you need to benchmark carefully with GTID (and ask Oracle to fix the bugs!).

Here are the table definitions and mysqlslap commands I used for the benchmark:

for i in `seq 1 100`
do
  mysql -e "drop database test$i"
  mysql -e "create database test$i"
  mysql test$i -e "create table t1 (id int auto_increment primary key, \
    value int, value2 varchar(100)) engine=innodb"
done
for i in `seq 1 100`
do
  mysqlslap --host=master --query="insert into test$i.t1 \
    values (null, 1, 'abcdefghijklmn')" --number-of-queries=100000 --concurrency=30 &
done
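Pulling the master-side recommendations together, here is a minimal my.cnf sketch -- not the author's exact configuration. The rpl_semi_sync_* lines are an assumption: they are the standard MySQL 5.6 semisync plugin variables and presume the plugins are already installed.

[mysqld]
# 1. Avoid the slow FILE-based repositories; relay_log_info must be TABLE
#    anyway for crash-safe slaves
master_info_repository         = TABLE
relay_log_info_repository      = TABLE

# 2. Reduced durability on the master -- acceptable because on master failure
#    a semisync slave is promoted rather than the master being recovered
sync_binlog                    = 0
innodb_flush_log_at_trx_commit = 2

# Assumed semisync plugin settings (standard MySQL 5.6 variable names)
rpl_semi_sync_master_enabled   = 1    # on the master
# rpl_semi_sync_slave_enabled  = 1    # on each semisync slave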

from Planet MySQL http://ift.tt/1du18ol

Presenting Undrop for InnoDB Toolkit on Percona Live 2015

http://ift.tt/1D06e4q

I’ll be presenting the Undrop for InnoDB data recovery toolkit at Percona Live 2015. The conference takes place in Santa Clara on 13-16 April. You may wonder why you should care if you don't plan to drop any of your production databases. To name a few reasons:

- Taxes, death and data loss are inevitable.
- Good knowledge of how InnoDB stores data on disk helps you design fast and scalable databases.
- The toolkit can be used to troubleshoot bugs in InnoDB.

So bookmark the session; it's going to be a great discussion. Undrop for InnoDB | Percona Live MySQL Conference 2015

The post Presenting Undrop for InnoDB Toolkit on Percona Live 2015 appeared first on Backup and Data Recovery for MySQL.

from Planet MySQL http://ift.tt/1du18ol

How to Bootstrap MySQL/MariaDB Galera Cluster

http://ift.tt/1zjwolJ

January 27, 2015 By Severalnines

Unlike a standard MySQL server or MySQL Cluster, starting a MySQL/MariaDB Galera Cluster works a bit differently. Galera requires you to start one node in the cluster as a reference point before the remaining nodes are able to join and form the cluster. This process is known as cluster bootstrap. Bootstrapping is the initial step that introduces a database node as the Primary Component, before the others treat it as a reference point to sync up data.

How does it work?

When Galera starts with the bootstrap command on a node, that particular node will reach the Primary state (check the value of wsrep_cluster_status). The remaining nodes just require a normal start command; they will automatically look for the existing Primary Component (PC) in the cluster and join it to form a cluster. Data synchronization then happens through either incremental state transfer (IST) or snapshot state transfer (SST) between the joiner and the donor.

So basically, you should only bootstrap the cluster if you want to start a new cluster or when no other node in the cluster is in PRIMARY state. Care should be taken when choosing the action to take, or else you might end up with split clusters or loss of data. The following example scenarios illustrate when to bootstrap the cluster:

How to start a Galera cluster?

The three Galera vendors use different bootstrapping commands (based on the software's latest version). On the first node, run:

Codership:              $ service mysql bootstrap
Percona XtraDB Cluster: $ service mysql bootstrap-pxc
MariaDB Galera Cluster: $ service mysql bootstrap

The above command is just a wrapper; what it actually does is start the MySQL instance on that node with gcomm:// as the wsrep_cluster_address variable. You can also manually define the variable inside my.cnf and run the standard start/restart command. However, do not forget to change wsrep_cluster_address back so it again contains the addresses of all nodes after the start.
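As a sketch of the manual alternative mentioned above (the node IP addresses are hypothetical), bootstrapping via my.cnf could look like this:

# /etc/my.cnf on the first node, for the initial bootstrap only:
wsrep_cluster_address = gcomm://

$ service mysql start   # the empty gcomm:// makes this node the Primary Component

# After the cluster is up, restore the full member list so that later restarts
# rejoin the existing cluster instead of bootstrapping a new one:
wsrep_cluster_address = gcomm://192.168.0.101,192.168.0.102,192.168.0.103

The remaining nodes keep the full gcomm:// address list from the beginning and are simply started normally.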

from Planet MySQL http://ift.tt/1du18ol

Reading RBR binary logs with pt-query-digest

http://ift.tt/1En1mdP

For purposes of auditing anything that goes on in our servers, we're looking to parse the binary logs of all servers (masters), as with "Anemomaster". With Row Based Replication this is problematic, since pt-query-digest does not support parsing RBR binary logs (true for 2.2.12, the latest at this time). I've written a simple script that translates RBR logs to SBR-like logs, with a little bit of cheating. My only interest is that pt-query-digest be able to capture and count the queries, nothing else. By doing some minimal text manipulation on the binary log, I'm able to feed it to pt-query-digest, which seems to be happy. The script does not, of course, parse the binary log directly; it requires the binary log to be extracted via:

mysqlbinlog --verbose --base64-output=DECODE-ROWS your-mysql-binlog-filename.000001

The above adds the interpretation of the RBR entries in the form of (unconventional) statements, commented out, and strips away the cryptic RBR text. All that is left is to do a little manipulation on entry headers and uncomment the interpreted queries. The script can be found in my gist repositories. The current version is as follows:

#!/usr/bin/python
#
# Convert a Row-Based-Replication binary log to Statement-Based-Replication format, cheating a little.
# This script exists since Percona Toolkit's pt-query-digest cannot digest RBR format. The script
# generates enough for it to work with.
# Expects standard input.
# Expected input is the output of "mysqlbinlog --verbose --base64-output=DECODE-ROWS <binlog_file_name>"
# For example:
# $ mysqlbinlog --verbose --base64-output=DECODE-ROWS mysql-bin.000006 | python binlog-rbr-to-sbr.py | pt-query-digest --type=binlog --order-by Query_time:cnt --group-by fingerprint
#
import fileinput


def convert_rbr_to_pseudo_sbr():
    inside_rbr_statement = False
    for line in fileinput.input():
        line = line.strip()
        # Rewrite RBR event headers into pseudo "Query" event headers that pt-query-digest accepts
        if line.startswith("#") and "end_log_pos" in line:
            for rbr_token in ["Update_rows:", "Write_rows:", "Delete_rows:", "Rows_query:", "Table_map:"]:
                if rbr_token in line:
                    line = "%s%s" % (line.split(rbr_token)[0], "Query\tthread_id=1\texec_time=0\terror_code=0")
        if line.startswith("### "):
            inside_rbr_statement = True
            # The "### " commented rows are the pseudo-statement interpreted by mysqlbinlog's
            # "--verbose", and which we will feed into pt-query-digest; uncomment them
            line = line.split(" ", 1)[1].strip()
        else:
            if inside_rbr_statement:
                # Terminate the pseudo-statement that just ended
                print("/*!*/;")
                inside_rbr_statement = False
        print(line)


convert_rbr_to_pseudo_sbr()
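Putting it all together, the end-to-end pipeline from the script's header comment extracts the binary log, converts it, and has pt-query-digest rank queries by count:

$ mysqlbinlog --verbose --base64-output=DECODE-ROWS mysql-bin.000006 | python binlog-rbr-to-sbr.py | pt-query-digest --type=binlog --order-by Query_time:cnt --group-by fingerprint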

from Planet MySQL http://ift.tt/1du18ol

MySQL benchmarks on eXFlash DIMMs

http://ift.tt/1zgY36J

In this blog post, we will discuss MySQL performance on eXFlash DIMMs. Earlier we measured the IO performance of these storage devices with sysbench fileio.

Environment

The benchmarking environment was the same as the one we used for sysbench fileio:

CPU: 2x Intel Xeon E5-2690 (hyper-threading enabled)
FusionIO driver version: 3.2.6 build 1212
Operating system: CentOS 6.5
Kernel version: 2.6.32-431.el6.x86_64

In this case, we used a separate machine for testing, connected to this server over 10G Ethernet. That machine executed sysbench; the client was not the bottleneck in this case. The environment is described in greater detail at the end of the blog post.

Sysbench OLTP write workload

The graph shows throughput for sysbench OLTP. We will examine properties only for the dark areas of the graph, i.e. the read/write case at high concurrency.

Each table in the following sections has the following columns:

column    explanation
storage   The device that was used for the measurement.
threads   The number of sysbench client threads used in the benchmark.
ro_rw     Read-only or read-write. In the whitepaper you can find detailed information about read-only data as well.
sd        The standard deviation of the metric in question.
mean      The mean of the metric in question.
95thpct   The 95th percentile of the metric in question (the maximum without the highest 5 percent of the samples).
max       The maximum of the metric in question.

Sysbench OLTP throughput (transactions/s)

storage         threads  ro_rw  sd         mean       95thpct    max
eXFlash DIMM_4  128      rw     714.09605  5996.5105  7172.0725  7674.87
eXFlash DIMM_4  256      rw     470.95410  6162.4271  6673.0205  7467.99
eXFlash DIMM_8  128      rw     195.57857  7140.5038  7493.4780  7723.13
eXFlash DIMM_8  256      rw     173.51373  6498.1460  6736.1710  7490.95
fio             128      rw     588.14282  1855.4304  2280.2780  7179.95
fio             256      rw     599.88510  2187.5271  2584.1995  7467.13

Going from 4 to 8 eXFlash DIMMs mostly means more consistent throughput. The mean throughput is significantly higher with 8 DIMMs, but the 95th percentile and maximum values are not much different (the difference in standard deviation also shows this). The reason they are not much different is that these benchmarks are CPU bound (check the CPU idle time table later in this post, or the graphs in the whitepaper). The PCI-E flash drive, on the other hand, can do less than half the throughput of the eXFlash DIMMs (the most relevant comparison is the 95th percentile value).

Sysbench OLTP response time (ms)

storage         threads  ro_rw  sd          mean        95thpct   max
eXFlash DIMM_4  128      rw     4.4187784   37.931489   44.2600   64.54
eXFlash DIMM_4  256      rw     9.6642741   90.789317   109.0450  176.45
eXFlash DIMM_8  128      rw     2.1004085   28.796017   32.1600   67.10
eXFlash DIMM_8  256      rw     5.5932572   94.060628   101.6300  121.92
fio             128      rw     51.2343587  138.052150  203.1160  766.11
fio             256      rw     72.9901355  304.851844  392.7660  862.00

The 95th percentile response times for the eXFlash DIMMs are less than 1/4 of those of the PCI-E flash device.

CPU idle percentage

storage         threads  ro_rw  sd          mean        95thpct  max
eXFlash DIMM_4  128      rw     1.62846674  3.3683857   6.2600   22.18
eXFlash DIMM_4  256      rw     1.06980095  2.2930634   3.9170   26.37
eXFlash DIMM_8  128      rw     0.42987637  0.8553543   1.2900   15.28
eXFlash DIMM_8  256      rw     1.32328435  4.4861795   6.7100   9.40
fio             128      rw     4.21156996  26.1278994  31.5020  55.49
fio             256      rw     5.49489852  19.3123639  27.6715  47.34

The CPU idle percentages show that the performance bottleneck in this benchmark was the CPU in the eXFlash DIMM cases (with both 4 and 8 DIMMs, which is why we didn't see a substantial throughput difference between the 4- and 8-DIMM setups). For the PCI-E flash, the storage device itself was the bottleneck.

If you are interested in more details, download the free white paper, which contains the full analysis of the sysbench OLTP and LinkBench benchmarks.

The post MySQL benchmarks on eXFlash DIMMs appeared first on MySQL Performance Blog.
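The post does not list the exact sysbench invocation; purely as a sketch (hypothetical host, user and table size, legacy sysbench 0.4/0.5-style flags), a comparable OLTP read/write run at 128 client threads might look like this:

# one-time table preparation (host, user and table size are hypothetical)
sysbench --test=oltp --mysql-host=db-server --mysql-user=sbtest \
  --mysql-table-engine=innodb --oltp-table-size=10000000 prepare

# read/write OLTP run with 128 client threads for 5 minutes
sysbench --test=oltp --mysql-host=db-server --mysql-user=sbtest \
  --oltp-table-size=10000000 --num-threads=128 --max-time=300 --max-requests=0 run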

from Planet MySQL http://ift.tt/1du18ol

One More Possible Reason for ERROR 1959 (OP000) Invalid role specification

http://ift.tt/1C5MMGY

Mon, 2015-01-26 09:59 — Chris Calender

I ran into this error today while working with ROLES -- bundled privileges for a group of users (a nice addition in MariaDB 10.0):

ERROR 1959 (OP000): Invalid role specification

The manual suggests that this error occurs "if a role that doesn't exist, or to which the user has not been assigned, is specified". However, I ran into a case where the role was created and assigned properly (examine SHOW GRANTS for the user in question). In this case, the problem was due to a bug that affected remote connections in older versions. Thus if you encounter this error, check whether you're using a remote connection; if so, this bug is your problem (assuming you created and assigned the role properly). If you're interested in reading more about roles, please refer to: http://ift.tt/1C5MMXe

Hope this helps.

Tags: Howto

About the Author: Chris Calender is a Principal Support Engineer with MariaDB. Earlier he was a Principal Support Engineer at MySQL/Sun/Oracle. Before that, he worked as a MySQL DBA and developer for numerous Fortune 500 companies, including Clear Channel, Western & Southern, and Cincinnati Bell. Chris has both Bachelor's and Master's degrees in Computer Science, from Miami University and the University of Cincinnati, respectively.
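For context, here is a minimal sketch of creating and assigning a role in MariaDB 10.0 (the role, user and database names are hypothetical). A client hitting the remote-connection bug above would fail at SET ROLE even with grants like these correctly in place:

-- as an administrative user
CREATE ROLE reporting;
GRANT SELECT ON sales.* TO reporting;
CREATE USER 'alice'@'%' IDENTIFIED BY 'secret';
GRANT reporting TO 'alice'@'%';

-- as alice
SET ROLE reporting;     -- this is where ERROR 1959 would surface
SHOW GRANTS;            -- verify the role was created and assigned properly
SELECT CURRENT_ROLE();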

from Planet MySQL http://ift.tt/1du18ol