Thursday, April 10, 2025

Generate sp_updateextendedproperty statements from existing metadata

Here is a quick script to generate sp_updateextendedproperty dynamically from your existing metadata.

DECLARE @TableName         SYSNAME,
        @ColumnName        SYSNAME,
        @ColumnDescription VARCHAR(5000),
        @SchemaName        SYSNAME;

DECLARE @UpdateCursor CURSOR;

SET @UpdateCursor = CURSOR FOR
    SELECT
        t.name                          AS TableName,
        c.name                          AS ColumnName,
        CAST(ep.value AS VARCHAR(5000)) AS ColumnDescription,
        SCHEMA_NAME(t.schema_id)        AS SchemaName
    FROM sys.extended_properties AS ep
    INNER JOIN sys.tables  AS t ON ep.major_id = t.object_id
    INNER JOIN sys.columns AS c ON ep.major_id = c.object_id
                               AND ep.minor_id = c.column_id
    WHERE ep.name = N'MS_Description'                    -- only the description property
      AND CAST(ep.value AS VARCHAR(5000)) LIKE 'HOST%'   -- filter for the descriptions you want to regenerate
    ORDER BY t.name;

OPEN @UpdateCursor;

FETCH NEXT FROM @UpdateCursor INTO @TableName, @ColumnName, @ColumnDescription, @SchemaName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Double up any embedded single quotes so the generated statement stays valid
    PRINT('EXEC sp_updateextendedproperty
    @name = N''MS_Description'', @value = N''' + REPLACE(@ColumnDescription, '''', '''''') + ''',
    @level0type = N''Schema'', @level0name = N''' + @SchemaName + ''',
    @level1type = N''Table'',  @level1name = N''' + @TableName + ''',
    @level2type = N''Column'', @level2name = N''' + @ColumnName + ''';');

    FETCH NEXT FROM @UpdateCursor INTO @TableName, @ColumnName, @ColumnDescription, @SchemaName;
END

CLOSE @UpdateCursor;
DEALLOCATE @UpdateCursor;
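
For reference, the output is a series of statements like the one below; the schema, table, column, and description values here are hypothetical placeholders, not values from any real database:

-- Hypothetical example of one statement printed by the script above
EXEC sp_updateextendedproperty
    @name = N'MS_Description', @value = N'HOST name of the originating server',
    @level0type = N'Schema', @level0name = N'dbo',
    @level1type = N'Table',  @level1name = N'ServerLog',
    @level2type = N'Column', @level2name = N'HostName';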

Monday, August 22, 2022

List Compressed Tables on SQL Server

Here is a simple query to list all compressed tables on SQL Server:

SELECT DISTINCT t.name AS CompressedTables
FROM sys.partitions AS p
INNER JOIN sys.tables AS t
    ON p.object_id = t.object_id
WHERE p.data_compression > 0;
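
If you also want to see which compression type each table uses, a variant along these lines works (data_compression_desc comes from sys.partitions):

SELECT DISTINCT
    SCHEMA_NAME(t.schema_id) AS SchemaName,
    t.name                   AS TableName,
    p.data_compression_desc  AS CompressionType  -- ROW, PAGE, or COLUMNSTORE
FROM sys.partitions AS p
INNER JOIN sys.tables AS t
    ON p.object_id = t.object_id
WHERE p.data_compression > 0
ORDER BY TableName;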

Check out my new book about R Language http://www.amazon.com/dp/B00SX6WA06

Tuesday, February 9, 2021

My experience with AWS Cloud Practitioner certification

On December 21, 2020, I successfully completed the exam for the AWS Cloud Practitioner certification.

I had been planning to obtain this certification for a long time, since my work involves AWS resources and that knowledge has become essential.

I already have experience with Microsoft certification exams, but my last certification was in 2012 (yes, a long time ago), and all of my certifications up to that point had been taken at Prometric certification centers.

When I started looking for initial information, the first source I turned to was AWS itself. As everyone knows, AWS is very professional in its documentation, and certification is no exception.

The link to the information for this certification is here. A careful read of the exam guide is essential. One important note: so far, none of the AWS exams have a Portuguese translation.

With the knowledge areas mapped, I looked for more resources to study for the exam in a more practical way and found a good reference on the website of the company Digital Cloud Training. The study cheat sheets are very well designed. Akbar Hussein also shared good tips for this exam on his Medium blog.

With the resources and goal defined, I took two weeks of vacation from work. Because of COVID I couldn't travel, so I decided to use the time to make a study plan and take the exam.

I studied an average of 2.5 hours a day using the resources of Digital Cloud Training and AWS Learning itself.

Because of the pandemic, I chose to take the exam at home, which was something new for me.

The exam cost US$ 100 (with the high dollar exchange rate, this was a challenge too), and I scheduled it a week in advance. That was short notice, as there were almost no slots available for the date I wanted. I recommend booking as far in advance as possible to get the most convenient time and date for you.

A warning to candidates for this exam: prepare your environment by removing books and electronics, and make your testing area as clean as possible. The proctor who monitors your exam will ask you to remove any suspicious objects to prevent fraud, and will also ask you to remove watches of any kind.

On the day of the exam, you must check in at least 15 minutes before your scheduled date/time. Using the webcam, I took pictures of my ID and waited for the exam to start. This point is very important: if you are late or fail to meet the proctor's requests, your exam will be canceled, with no right to a refund!

Something that bothered me a little was the delay before the exam started, which took almost 30 minutes. During this period, as I understand it, the documents are validated and the candidate's name is checked against the registration on the website of the certification company Pearson VUE.

After 30 minutes, the proctor informed me through the exam software's chat that the exam would begin.

The exam consists of 65 questions and you have 110 minutes to complete it. The exam environment is very similar to other certification exams I have taken, including the option to mark questions for review if necessary.

The exam was really tough, with challenging questions. Every candidate agrees to the NDA, so I can't comment on the questions themselves, but I recommend that candidates for this exam pay close attention to these topics:

  • The AWS shared responsibility model
  • Billing models
  • Support plans
  • AWS Security
  • Development framework best practices

I was surprised by the difficulty level of several questions, and I believe that even for someone who already works with AWS, at least two weeks of study focused on the exam are necessary.

Anyway, this was my experience. I recommend focused preparation even if you already have AWS experience, and you will definitely need good technical English.

Good luck to all candidates!

#aws #certification #amazon #motivation

Check out my new book about R Language http://www.amazon.com/dp/B00SX6WA06


Tuesday, May 19, 2020

First and Last Days of the previous week using MySQL

A bit of a challenge here, trying to get the first and last day of the previous week (Sunday and Saturday).

SELECT
    DATE_ADD(DATE_ADD(NOW(), INTERVAL -WEEKDAY(NOW()) - 1 DAY), INTERVAL -7 DAY) AS 'Sunday',
    DATE_ADD(DATE_ADD(DATE_ADD(NOW(), INTERVAL -WEEKDAY(NOW()) - 1 DAY), INTERVAL 6 DAY), INTERVAL -7 DAY) AS 'Saturday';
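
Because NOW() carries a time component, the results above come back as timestamps. A variant of the same arithmetic using CURDATE() returns plain dates:

-- Previous week's Sunday and Saturday as plain dates
SELECT
    DATE_ADD(DATE_ADD(CURDATE(), INTERVAL -WEEKDAY(CURDATE()) - 1 DAY), INTERVAL -7 DAY) AS 'Sunday',
    DATE_ADD(DATE_ADD(DATE_ADD(CURDATE(), INTERVAL -WEEKDAY(CURDATE()) - 1 DAY), INTERVAL 6 DAY), INTERVAL -7 DAY) AS 'Saturday';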


Check out my new book about R Language http://www.amazon.com/dp/B00SX6WA06

Friday, February 7, 2020

Strange characters on an HTML file uploaded to Amazon S3

I got some strange characters on an HTML file that was uploaded to S3. I managed to solve that by setting the Content-Type metadata field with the "charset=utf-8" directive.
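
If you upload from the command line, the same fix can be applied at upload time. A sketch with the AWS CLI, assuming a hypothetical bucket and file name:

# Upload the HTML file and set the Content-Type metadata with an explicit charset
aws s3 cp index.html s3://my-example-bucket/index.html --content-type "text/html; charset=utf-8"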



Check out my new book about R Language http://www.amazon.com/dp/B00SX6WA06

Friday, January 24, 2020

Pentaho - statement did not return a result set

Using a Table Input step with a Microsoft SQL Server procedure, I started getting this error message:

"statement did not return a result set"

After some googling and misleading forum threads, I found that the JDBC driver was getting confused by the row count messages.

To solve this, you need to add SET NOCOUNT ON; at the beginning of your procedure.
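
As an illustration, here is a minimal sketch of where the statement goes; the procedure and table names are hypothetical:

-- SET NOCOUNT ON suppresses the "rows affected" messages that the JDBC driver
-- would otherwise interpret before the real result set arrives.
CREATE OR ALTER PROCEDURE dbo.usp_GetOrders
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderId, CustomerId, OrderDate
    FROM dbo.Orders;
END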



Check out my new book about R Language http://www.amazon.com/dp/B00SX6WA06

Tuesday, December 17, 2019

[From Technet] Improvements to machine learning capabilities in SQL Server 2019

Many organizations seek to do more with their data than pump out dashboards and reports. Applying advanced analytical approaches such as machine learning is an essential arena of knowledge for any data professional. While database administrators (DBAs) don't necessarily have to become data scientists, they should have a deep understanding of the machine learning technologies at their disposal and how to use them in collaboration with other domain experts.

For those of us who work with SQL Server, there are many cool new capabilities to get familiar with in SQL Server 2019. At the heart of it all is a solution called Big Data Clusters, allowing you to create scalable clusters of SQL Server, Apache Spark, and HDFS containers running on Kubernetes.

That means flexibility in the ways you can access big data and relational data side by side. Through the cluster, you can query data from external sources. You can also store big data in HDFS managed by SQL Server. At the end of the day, this makes more of your data available, faster and more easily, for machine learning, artificial intelligence, and other advanced analytical tasks.

SQL Server 2019 also provides expanded machine learning capabilities built in. It adds commonly requested features related to the use of R and Python for machine learning. For example, SQL Server 2019 enables SQL Server Machine Learning Services to be installed on Linux. Failover clusters are supported for greater reliability, and new and improved scripting capabilities open new options for generating and enhancing models.

Integration of Python with the SQL Server database engine enables you to perform advanced machine learning tasks close to the data rather than moving it around. Insights generated from the Python runtime can be accessed by production applications using standard SQL Server data access methods.
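
As a rough illustration of what that looks like, here is a sketch that assumes Machine Learning Services with Python is installed and enabled, and uses a hypothetical dbo.Orders table:

-- Run a Python snippet next to the data; InputDataSet and OutputDataSet are the
-- data frames sp_execute_external_script exchanges with the script.
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'
df = InputDataSet
df["amount_zscore"] = (df["amount"] - df["amount"].mean()) / df["amount"].std()
OutputDataSet = df
',
    @input_data_1 = N'SELECT order_id, CAST(amount AS FLOAT) AS amount FROM dbo.Orders'
WITH RESULT SETS ((order_id INT, amount FLOAT, amount_zscore FLOAT));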

With the addition of partition-based modeling, you can train many small models instead of one large model when using partitioned data. If you have data that breaks out easily using categories such as demographics or regions, partitioning enables you to get more granular with your models without having to break the dataset apart.
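
Partition-based modeling hangs off the same procedure; here is a sketch of the shape, again with hypothetical table and column names, and with a placeholder script where per-partition training or scoring code would go:

-- The script runs once per distinct value of the partitioning column,
-- so each invocation sees only one region's rows.
EXEC sp_execute_external_script
    @language = N'Python',
    @script = N'OutputDataSet = InputDataSet',  -- placeholder: train or score one partition here
    @input_data_1 = N'SELECT region, CAST(amount AS FLOAT) AS amount FROM dbo.Sales',
    @input_data_1_partition_by_columns = N'region';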

As the line between DBA and data scientist continues to blur, most of us will be expected to understand and manage these types of solutions. Microsoft clearly recognizes the importance of machine learning and the need to apply it more easily across different data types while maintaining the performance and manageability benefits of using SQL Server.

To learn more about what you can do with Microsoft SQL Server 2019, check out the free Packt guide Introducing Microsoft SQL Server 2019. If you're ready to jump to a fully managed cloud solution, check out the Essential Guide to Data in the Cloud.

The post Improvements to machine learning capabilities in SQL Server 2019 appeared first on SQL Server Blog.



from SQL Server Blog https://ift.tt/2r2Oewv

Wednesday, December 11, 2019

[From Technet] Tools and commands for running SQL Server 2019 on Linux

Organizations that embraced the option to run Microsoft SQL Server 2017 on Linux have been looking forward to the release of SQL Server 2019. Regardless of which operating system (OS) you choose, it's the same SQL Server database code, and includes even more of the same features and services as the Windows release. This introductory blog post about running Microsoft SQL Server 2019 on Linux provides basic information database professionals need to know before upgrading or migrating SQL Server onto Linux.

Supported Linux platforms

Microsoft SQL Server 2019 is tested and supported to run on several Linux distribution platforms:

  • Red Hat Enterprise Linux (RHEL)
  • SUSE Linux Enterprise Server (SLES)
  • Ubuntu

Along with the above versions of Linux distributions, SQL Server 2019 is supported in a container scenario using a Docker image. Running a SQL Server database inside a Docker engine with Linux offers more flexibility, faster recovery, and quicker deployments, including deployments into the Azure cloud. For those becoming familiar with Linux, Docker for Windows or Mac gives you the option to run a Docker engine on your workstation with SQL Server 2019 on Linux.
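
For reference, spinning up such a container locally is a one-liner. This sketch assumes Docker is installed; the container name and password are placeholders (the password must meet SQL Server's complexity requirements):

# Pull and run the SQL Server 2019 on Linux image
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=YourStrong!Passw0rd" \
    -p 1433:1433 --name sql2019 -d mcr.microsoft.com/mssql/server:2019-latest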

Along with Docker technology, you can orchestrate, manage, and deploy SQL Server containers on Linux using Red Hat OpenShift or Kubernetes. This includes SQL Server 2019 Big Data Clusters (BDC), fully scalable clusters with SQL Server, Spark, and Hadoop File System (HDFS). BDCs provide the ability to read, write, and analyze big data with T-SQL or Spark, and you can combine big data and relational data, too.

While this post has focused on RHEL, SLES, Ubuntu, and container options for Linux, you may have questions regarding other Linux distributions that may be able to run SQL Server 2019, but they're not supported by Microsoft. Always consider the database server's use before deciding to run SQL Server 2019 on an unsupported operating system and refer to Microsoft's support policy on the topic.

Tools for Microsoft SQL Server 2019 running on Linux

With the release of SQL Server 2019, there are more tools that database professionals can use with SQL Server 2019 running on Linux:

  • Windows-based tools: You can use any existing tools that run on Windows to access SQL Server on Linux. This includes SQL Server Management Studio (SSMS), SQL Server Data Tools (SSDT), operational support systems, and third-party tools.
  • Activity Monitor: Use SSMS on Windows to connect remotely and use tools/features such as Activity Monitor against a Linux host.
  • Azure Data Studio: Use this cross-platform database tool to manage the Microsoft family of on-premises and cloud data platforms on Windows, MacOS, and Linux. That includes SQL Server 2019 running on Linux. You can also create SQL Server Notebooks in Azure Data Studio, allowing multiple team members to collaborate on projects.
  • Dynamic Management Views (DMVs): System DMVs collect different types of information about SQL Server, including Linux process information. Use server state information in conjunction with dynamic management functions (DMFs) to monitor the health of a server instance, diagnose problems, and tune performance.
  • Query Store: Use Query Store for insights on query plan choice and performance. It simplifies performance troubleshooting by helping database professionals quickly find performance differences caused by query plan changes.
  • Performance Dashboard: The Performance Dashboard helps you determine if there's a performance bottleneck on the connected SQL Server instance and provides diagnostic data to help you resolve the problem.
  • mssql-conf: A configuration script that installs with SQL Server 2019 for Linux.

Command Line Tools for Microsoft SQL Server 2019 on Linux

  • sqlcmd and Bulk Copy Program (bcp): These command-line tools are natively available on Linux. sqlcmd is a command-line query utility, and bcp is a bulk import-export utility (a minimal sketch of both follows this list).
  • mssql-scripter: This command-line tool on Linux generates T-SQL scripts for a SQL database running anywhere.
  • mssql-cli: This new, interactive command-line tool written in Python offers several enhancements over sqlcmd in the terminal environment.
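
A minimal sketch of the first two tools against a local instance; the server, credentials, database, table, and output path below are placeholders:

# Ad hoc query with sqlcmd
sqlcmd -S localhost -U sa -P 'YourStrong!Passw0rd' -Q "SELECT @@VERSION;"

# Bulk-export a table to a flat file with bcp (character mode, comma-delimited)
bcp dbo.Orders out /tmp/orders.csv -S localhost -U sa -P 'YourStrong!Passw0rd' -d SalesDb -c -t','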

With these tools and command line tools, you're now ready to run Microsoft SQL Server 2019 on Linux. To learn more about what you can do with Microsoft SQL Server 2019, check out the free Packt guide Introducing Microsoft SQL Server 2019. If you're ready to jump to a fully managed cloud solution, check out the Essential Guide to Data in the Cloud.

The post Tools and commands for running SQL Server 2019 on Linux appeared first on SQL Server Blog.



from SQL Server Blog https://ift.tt/2Px0XzE

Thursday, December 5, 2019

[From Technet] Easier management of PolyBase and relational data through SQL Server 2019

The days when a database administrator (DBA) could specialize solely in a single database technology are rapidly ending. Today, we're much more likely than ever before to be asked to bring together many types of data from diverse sources. Although specialization still has its place, having the knowledge and tools at our disposal to cross those boundaries makes us much more useful.

That's one reason to get excited about the continued expansion of the PolyBase technology introduced in SQL Server 2016, which has become much more powerful in the release of SQL Server 2019.

Before PolyBase, when trying to use external data sources from SQL Server, you either had to transfer data from one source to another or query both sources and then write custom logic to join and integrate the data at the client level. PolyBase simplifies the process of reading from external data sources. It does so by enabling your SQL Server instance to process Transact-SQL (T-SQL) queries that access both external data and relational data inside the instance.

Initially, PolyBase targeted Apache Hadoop and Azure Blob Storage. The ability to target big data inside Hadoop nodes expanded the ability to do modern analytics seamlessly from a SQL Server platform. No additional software needs to be installed in the Hadoop environment, and you don't even need to know Hadoop intimately to query it. PolyBase even pushed some computations to the Hadoop node to optimize queries.

What's awesome about PolyBase in 2019 is that Microsoft has added connectors that allow you to query a much wider range of external data. PolyBase now works with Oracle, Teradata, MongoDB, and any database that conforms to Open Database Connectivity (ODBC) standards. You can query the data where it lives without having to bring it into SQL Server. Best of all, it provides distributed, scalable query performance.
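
Here is a rough sketch of the moving parts, assuming an Oracle source; the credential, names, and connection details below are hypothetical, the exact LOCATION formats depend on the source, and the scoped credential requires a database master key:

-- Credential used to authenticate against the external source (hypothetical values)
CREATE DATABASE SCOPED CREDENTIAL OracleCredential
WITH IDENTITY = 'oracle_user', SECRET = 'oracle_password';

-- External data source pointing at the Oracle server
CREATE EXTERNAL DATA SOURCE OracleSales
WITH (LOCATION = 'oracle://oracle-host:1521', CREDENTIAL = OracleCredential);

-- External table mapped onto a remote Oracle table
CREATE EXTERNAL TABLE dbo.RemoteOrders
(
    order_id INT,
    amount   DECIMAL(10, 2)
)
WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);

-- Query external and local data together with plain T-SQL
SELECT c.customer_name, r.amount
FROM dbo.Customers AS c
INNER JOIN dbo.RemoteOrders AS r
    ON r.order_id = c.last_order_id;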

PolyBase can also simplify things when you do have to move data from one place to another. For example, you can bring data from Hadoop, Azure Blob Storage, or Azure Data Lake Store into relational tables. This allows you to take advantage of the fast analysis and columnstore capabilities of SQL Server without the need for a separate extract, transform, load (ETL) or import tool. On the other hand, it's also easy to export data from SQL Server to cost-effective online storage. Features like PolyBase make it much easier for database administrators to help organizations get maximum value from all their data, while optimizing the ratio of cost to performance.

To learn more about what you can do with Microsoft SQL Server 2019, check out the free Packt guide Introducing Microsoft SQL Server 2019. If you're ready to jump to a fully managed cloud solution, check out the Essential Guide to Data in the Cloud.

The post Easier management of PolyBase and relational data through SQL Server 2019 appeared first on SQL Server Blog.



from SQL Server Blog https://ift.tt/2RngVyQ