Dealing with core Cache in Drupal 8

At this point, if you try to log in to the Drupal 8 website you will be rejected. This is because the login system does not read the users_field_data table directly; instead, a cache of entities is used.

To flush the cache for a specific user entity without compromising the rest of your system's cache, you can use the following SQL statement.

DELETE FROM cache_entity WHERE cid = 'values:user:1';
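If you have Drush available, a gentler alternative (a sketch, assuming Drush is installed for the site) is to rebuild the caches rather than editing the cache tables by hand:

```shell
# Rebuild all Drupal 8 caches (this clears cache_entity among others)
drush cache-rebuild
```

This clears more than just the one entity, but it avoids touching the database directly.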

Now you can grab a cup of coffee/tea and enjoy your Drupal 8 website.

Hope this helps you.

 

Resetting the administrator password with SQL-query in Drupal 8

It happens to all of us: we forget the password for a local site, or even a live one. When it comes to Drupal 8, things have changed from the prior versions of Drupal.

Here we go with some of the tricks for Drupal 8.

The Solution
Generate a new password
First, you have to generate a password hash that is valid for your site.

Execute the following commands from the command line, in the Drupal 8 root directory:

$ php core/scripts/password-hash.sh 'your-new-pass-here'

password: your-new-pass-here    hash: $S$EV4QAYSIc9XNZD9GMNDwMpMJXPJzz1J2dkSH6KIGiAVXvREBy.9E

Update the user password.

Now you need to update the user password. In our case, we need to update the Administrator password; fortunately, the UID for the Administrator is 1, the same as in previous versions of Drupal.

With the new password hash, we need to run the following SQL statement.

UPDATE users_field_data SET pass='$S$E5j59pCS9kjQ8P/M1aUCKuF4UUIp.dXjrHyvnE4PerAVJ93bIu4U' WHERE uid = 1;
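Both steps can be combined from the shell. This is only a sketch: the database name drupal8 and user dbuser are placeholders for your own credentials.

```shell
# Generate the hash and write it straight into users_field_data.
# 'drupal8' and 'dbuser' are assumptions; substitute your own values.
HASH=$(php core/scripts/password-hash.sh 'your-new-pass-here' | awk '{print $NF}')
mysql -u dbuser -p drupal8 -e \
  "UPDATE users_field_data SET pass='${HASH}' WHERE uid = 1;"
```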

We are all set with the password update. Go log in!

Vim Plugin for Drupal on Linux

Indentation

The following commands will indent your code the right amount, using spaces rather than tabs, and automatically indent after you start. The commands should be added to a .vimrc file in your home directory (~/.vimrc); you may need to create it.
set expandtab
set tabstop=2
set shiftwidth=2
set autoindent
set smartindent

Syntax highlighting

If you enjoy syntax highlighting, it may be worth remembering that many of Drupal’s PHP files are *.module or *.inc, among others.

Vim seems to syntax highlight *.inc files properly by default but doesn’t know that some other files are PHP content. For *.module and *.install, use this snippet in .vimrc:
if has("autocmd")
" Drupal *.module and *.install files.
augroup module
autocmd BufRead,BufNewFile *.module set filetype=php
autocmd BufRead,BufNewFile *.install set filetype=php
autocmd BufRead,BufNewFile *.test set filetype=php
autocmd BufRead,BufNewFile *.inc set filetype=php
autocmd BufRead,BufNewFile *.profile set filetype=php
autocmd BufRead,BufNewFile *.view set filetype=php
augroup END
endif
syntax on

More can be done; it's all open source, so it's up to you 🙂

Getting Rid Of Magento ReIndexing Errors

If the Magento indexer fails to respond or keeps throwing the same errors even after multiple attempts, you can take the following steps to resolve Magento reindexing errors.

1. Locate the var/locks directory and remove all files in it. This will clear all the locks so that re-indexing can take place again.
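From the Magento root, step 1 can be done in one line (adjust the path if your var directory lives elsewhere):

```shell
# Remove every stale indexer lock file; -f keeps rm quiet if none exist
rm -f var/locks/*
```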

2. Now, log in to MySQL/phpMyAdmin and run the following MySQL queries (ensure that you have taken a full backup before committing these queries):

DELETE cpop.* FROM catalog_product_option_price AS cpop
INNER JOIN catalog_product_option AS cpo
ON cpo.option_id = cpop.option_id
WHERE
cpo.type = 'checkbox' OR
cpo.type = 'radio' OR
cpo.type = 'drop_down';

DELETE cpotp.* FROM catalog_product_option_type_price AS cpotp
INNER JOIN catalog_product_option_type_value AS cpotv
ON cpotv.option_type_id = cpotp.option_type_id
INNER JOIN catalog_product_option AS cpo
ON cpotv.option_id = cpo.option_id
WHERE
cpo.type = 'checkbox' OR
cpo.type = 'radio' OR
cpo.type = 'drop_down';
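Once the locks and orphaned rows are gone, you can trigger a full reindex from the command line (assuming Magento 1.x, which ships the shell/indexer.php script):

```shell
# Run from the Magento root; rebuilds every index in one pass
php shell/indexer.php reindexall
```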

Hope the above helps 🙂

Install Postgresql On Ubuntu

Start by installing PostgreSQL:
sudo apt-get install postgresql postgresql-contrib

Next, log in as the admin user: sudo -u postgres psql

\du : lists the users available in PostgreSQL

Then change the user's password with the ALTER USER command:

ALTER USER postgres WITH PASSWORD 'newpassword';

You can add a new user with the createuser command:

createuser [connection-option...] [option...] [username]

Likewise, you can add users and change their attributes and passwords too.

To exit from PostgreSQL, use the command: postgres=# \q
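Putting the steps together, a typical first session might look like this (a sketch; myuser and mydb are placeholder names):

```shell
sudo apt-get install postgresql postgresql-contrib  # install the server and extras
sudo -u postgres psql                               # open a psql session as admin
# inside psql:
#   \du                                             -- list roles
#   ALTER USER postgres WITH PASSWORD 'newpassword';
#   \q                                              -- quit
sudo -u postgres createuser --pwprompt myuser       # add a role, prompting for its password
sudo -u postgres createdb -O myuser mydb            # give the new role its own database
```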

MySQL command prompt auto-complete

It's a good trick.

Naveen S Nayak's Blog

If you work on the MySQL command line, it's nice to have autocompletion similar to the Linux bash shell, where you press the TAB key and the commands complete.

In MySQL, we can set this up to a degree to provide some hints as we type. Though it might not complete everything you type, it does complete table names, etc.

There are two ways to do it (I am on CentOS 6.5).

When you log in to MySQL, use the auto-rehash option:

mysql --auto-rehash -u root -p

If this does not work, try creating a file called .my.cnf in your home directory and put the following into it:

[mysql]
auto-rehash


What is Hadoop Big Data?

Apache™ Hadoop® is an open source software project that enables the distributed processing of large data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance. Rather than relying on high-end hardware, the resiliency of these clusters comes from the software’s ability to detect and handle failures at the application layer.

Apache Hadoop has two pillars:

YARN – Yet Another Resource Negotiator (YARN) assigns CPU, memory, and storage to applications running on a Hadoop cluster. The first generation of Hadoop could only run MapReduce applications. YARN enables other application frameworks (like Spark) to run on Hadoop as well, which opens up a wealth of possibilities.


HDFS – Hadoop Distributed File System (HDFS) is a file system that spans all the nodes in a Hadoop cluster for data storage. It links together the file systems on many local nodes to make them into one big file system.
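Because HDFS presents the whole cluster as one big file system, day-to-day interaction uses familiar shell-style commands. A sketch, assuming a running cluster with the hdfs client on your PATH (the paths and file name are made up for illustration):

```shell
hdfs dfs -mkdir -p /user/alice/input      # create a directory in HDFS
hdfs dfs -put data.csv /user/alice/input  # copy a local file into the cluster
hdfs dfs -ls /user/alice/input            # list it; the file is now stored across nodes
```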

Hadoop enables a computing solution that is:

Scalable – New nodes can be added as needed, without changing data formats, how data is loaded, how jobs are written, or the applications on top.

Cost effective – Hadoop brings massively parallel computing to commodity servers. The result is a sizeable decrease in the cost per terabyte of storage, which in turn makes it affordable to model all your data.

Flexible – Hadoop is schema-less and can absorb any type of data, structured or not, from any number of sources. Data from multiple sources can be joined and aggregated in arbitrary ways, enabling deeper analyses than any one system can provide.

Fault tolerant – When you lose a node, the system redirects work to another copy of the data and continues processing without missing a beat.