Drupal 7 users set to blocked in the DB?

Drupal 7 prevents brute-force attacks on accounts by refusing login attempts once more than 5 attempts have failed.
The failed login attempts are recorded in the table 'flood'.

To clear the counters, execute the following query on the Drupal database. You will need to log in to the database first, typically through the command line or through a GUI such as phpMyAdmin.

DELETE FROM `flood`;

From the command line, with drush installed, execute the following command:

drush php-eval 'db_query("DELETE FROM `flood`");'
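If you would rather unblock a single account or IP address instead of truncating the whole table, the flood rows can be filtered on their event and identifier columns. A sketch, assuming the Drupal 7 core event names and the "uid-ip" identifier format used by the user module (42 is an example uid and 1.2.3.4 an example IP; adjust to your case):

```sql
-- Clear the failed-login counters for the account with uid 42
DELETE FROM `flood` WHERE event = 'failed_login_attempt_user' AND identifier LIKE '42-%';
-- Clear the per-IP counter for one address
DELETE FROM `flood` WHERE event = 'failed_login_attempt_ip' AND identifier = '1.2.3.4';
```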

Apache server configuration on Ubuntu

Edit the enabled site file under /etc/apache2/sites-enabled/:
/etc/apache2/sites-enabled/000-default.conf

<VirtualHost *:80>
    ServerAdmin webmaster@localhost

    DocumentRoot /home/web/public_html
    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>
    <Directory /home/web/public_html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>

    ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
    <Directory "/usr/lib/cgi-bin">
        AllowOverride None
        Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
        Order allow,deny
        Allow from all
    </Directory>

    ErrorLog ${APACHE_LOG_DIR}/error.log

    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn

    CustomLog ${APACHE_LOG_DIR}/access.log combined

    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>
</VirtualHost>

The same change can be made to the corresponding file in sites-available (the entries in sites-enabled are symlinks to it):
/etc/apache2/sites-available/*.conf


Introduction to the Features module for Drupal

Using Features to manage configuration avoids the usual alternatives, each of which has drawbacks:

1. Migrating or synchronizing databases between environments
2. Manually updating configuration in each environment
3. Writing custom code for all configuration changes

Drupal components that can be managed with Features

Features supported components
1. Content types: Selecting a content type as a component of a feature will automatically include all the fields associated with that content type, as well as any taxonomy vocabularies referenced by a Term reference field (although in the case of a Term reference field, Features will not include the vocabulary's terms).
2. Fields: Although individual fields may be selected as components of a feature, fields are automatically added as dependencies when creating features with content type components, and are not typically added manually.
3. Image styles: Custom image styles are available as components to be exported with a feature.
4. Text formats: There is typically not as much configuration involved with text input filters as there is for content types or Views. Even so, it is easier to add custom text format filter configuration to an easily managed, Features-generated module than to remember which text format filter changes you made on your development site and reconfigure them manually when you are ready to launch the live version of your site.
5. Menus: On large sites, menu entries can become quite extensive and tedious to manage manually.
6. Taxonomies: Although useful for exporting custom vocabularies that are associated with Term reference fields of an exported content type, the taxonomy Features component does not include the terms associated with the exported vocabularies.
7. Views: Custom Views are an excellent candidate for a Features module.
8. Roles and permissions: The ability to manage the configuration of roles and permissions on a simple site may not seem all that useful. However, once you start modifying a number of permissions and adding more than a couple of custom roles to your site, you will quickly come to appreciate the ability to manage this type of configuration with a Features-generated module.

Drupal components supported with additional modules

1. Core blocks: The Features module by itself does not support core blocks; support can be added with the Features Extra module (http://drupal.org/project/features_extra).
2. Content: Typically, you wouldn't want to manage content between environments or different sites with Features. However, there are some cases where core content for a site will be exactly the same between environments.
3. Vocabulary terms: Many times, you will have a vocabulary with terms that are not meant to be manipulated by anyone but administrative users. Or, you may have a custom view that uses a particular vocabulary term as the value for a term-based filter. These types of vocabularies are excellent candidates for managing as Features-generated modules. This extended Features functionality is provided by the UUID Features Integration module (http://drupal.org/project/uuid_features).
4. Variables: Any variables in the Drupal variable table can be added as a component of a Features-generated module by installing the Strongarm module (http://drupal.org/project/strongarm).

SCREEN on Linux

Step 1:
Create a named screen session with: screen -S session_name

Step 2:
List the running screen sessions (and their names) with: screen -ls

Step 3:
Reattach to a session from the terminal:
screen -r -d 13995
where 13995 is the process ID of the screen session you wish to attach to, as shown by screen -ls.

Git repo and more things to learn

Git, starting with the basics through to more advanced concepts.
Set your details:
git config --global user.name "John Doe"
git config --global user.email "john@example.com"

Use --global to set the configuration for all projects. If git config is run without --global inside a project directory, the settings apply only to that specific project.

See your settings

git config --list

Initialize a git repository for existing code

cd existing-project/
git init

Next, add a new file to your repo with the following command:
git add file_name.txt

Create a branch

git checkout master
git branch new-branch-name

Here master is the starting point for the new branch. Note that with these 2 commands we don’t move to the new branch, as we are still in master and we would need to run git checkout new-branch-name. The same can be achieved using one single command: git checkout -b new-branch-name
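As a quick sanity check, here is the two-step form followed by the single-command equivalent, run in a throwaway repository (the /tmp path and branch names are purely illustrative):

```shell
# Set up a scratch repository (illustrative path)
rm -rf /tmp/branch-demo && mkdir -p /tmp/branch-demo && cd /tmp/branch-demo
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial"

# Two-step form: create the branch, then switch to it
git branch new-branch-name
git checkout -q new-branch-name
git branch --show-current        # prints: new-branch-name

# One-step equivalent, creating and switching to a second branch
git checkout -q -b other-branch
git branch --show-current        # prints: other-branch
```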
Checkout a branch

git checkout new-branch-name

See commit history for just the current branch

git cherry -v master

(master is the branch you want to compare)
Merge branch commits

git checkout master
git merge branch-name

Here we are merging all commits of branch-name to master.
Merge a branch without committing

git merge branch-name --no-commit --no-ff

See differences between the current state and a branch

git diff branch-name

See differences in a file, between the current state and a branch

git diff branch-name path/to/file

Delete a branch

git branch -d new-branch-name

Push the new branch

git push origin new-branch-name
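A minimal end-to-end sketch using a local bare repository as origin (paths under /tmp are illustrative); passing -u records the remote branch as upstream, so later pushes and pulls need no arguments:

```shell
# Create a bare "remote" and a working repository pointing at it (illustrative paths)
rm -rf /tmp/push-demo /tmp/push-demo-origin.git
git init -q --bare /tmp/push-demo-origin.git
git init -q /tmp/push-demo && cd /tmp/push-demo
git remote add origin /tmp/push-demo-origin.git
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "initial"

# Create the branch and push it; -u sets origin/new-branch-name as its upstream
git checkout -q -b new-branch-name
git push -q -u origin new-branch-name
git rev-parse --abbrev-ref --symbolic-full-name @{u}   # prints: origin/new-branch-name
```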

Get all branches

git fetch origin

Get the git root directory

git rev-parse --show-toplevel

Source: http://stackoverflow.com/q/957928/1391963
Remove from repository all locally deleted files

git rm $(git ls-files --deleted)

Source: http://stackoverflow.com/a/5147119/1391963
Delete all untracked files

git clean -f

Including directories:

git clean -f -d

Previewing what would be deleted first (a dry run):

git clean -n -f -d

Source: http://stackoverflow.com/q/61212/1391963
Show total file size difference between two commits

Short answer: Git does not do that.
Long answer: See http://stackoverflow.com/a/10847242/1391963
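That said, a rough total can be computed by hand: sum the blob sizes in each commit's tree (`git ls-tree -r -l` prints each blob's size) and subtract. A sketch against a throwaway repository (paths and file contents are purely illustrative):

```shell
# Set up a scratch repository with two commits of known sizes (illustrative)
rm -rf /tmp/size-demo && mkdir -p /tmp/size-demo && cd /tmp/size-demo
git init -q
printf '0123456789' > a.txt                  # 10 bytes
git add a.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "first"
printf '01234' > b.txt                       # 5 more bytes
git add b.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "second"

# Total blob size at a commit: `git ls-tree -r -l` prints each blob's size in column 4
size_at() { git ls-tree -r -l "$1" | awk '{ total += $4 } END { print total + 0 }'; }

echo "$(( $(size_at HEAD) - $(size_at HEAD~1) ))"   # prints: 5
```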
Unstage (undo add) files:

git reset HEAD file.txt

See closest tag

git describe --tags `git rev-list --tags --max-count=1`

See also git-describe.
Have git pull running every X seconds, with GNU Screen

screen
for((i=1;i<=10000;i+=1)); do sleep 30 && git pull; done

Use Ctrl+a Ctrl+d to detach the screen.
See previous git commands executed

history | grep git

or

grep '^git' /root/.bash_history

See recently used branches (i.e. branches ordered by most recent commit)

git for-each-ref --sort=-committerdate refs/heads/ | head

Source: http://stackoverflow.com/q/5188320/1391963

Tar project files, excluding .git directory

cd ..
tar cJf project.tar.xz project/ --exclude-vcs

Tar all locally modified files

git diff --name-only | xargs tar -cf project.tar -T -

Look for conflicts in your current files
grep -H -r "<<<" *
grep -H -r ">>>" *
grep -H -r '^=======$' *

There’s also git-grep.
Apply a patch without using git:

patch < file.patch
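For instance, generating a patch with diff and applying it with patch alone (file names and contents are purely illustrative; note that diff exits nonzero when the files differ, hence the || true):

```shell
# Create a file, a modified copy, and a unified diff between them
rm -rf /tmp/patch-demo && mkdir -p /tmp/patch-demo && cd /tmp/patch-demo
printf 'hello\n' > file.txt
printf 'goodbye\n' > file.new
diff -u file.txt file.new > file.patch || true   # diff exits 1 when files differ

# Apply the patch to the original file, without git
patch -s file.txt < file.patch
cat file.txt    # prints: goodbye
```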

What is Hadoop (Big Data)?

Apache™ Hadoop® is an open source software project that enables the distributed processing of large data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance. Rather than relying on high-end hardware, the resiliency of these clusters comes from the software’s ability to detect and handle failures at the application layer.

Apache Hadoop has two pillars:

YARN – Yet Another Resource Negotiator (YARN) assigns CPU, memory, and storage to applications running on a Hadoop cluster. The first generation of Hadoop could only run MapReduce applications. YARN enables other application frameworks (like Spark) to run on Hadoop as well, which opens up a wealth of possibilities.


HDFS – Hadoop Distributed File System (HDFS) is a file system that spans all the nodes in a Hadoop cluster for data storage. It links together the file systems on many local nodes to make them into one big file system.

Hadoop enables a computing solution that is:
1. Scalable: New nodes can be added as needed, without changing data formats, how data is loaded, how jobs are written, or the applications on top.
2. Cost effective: Hadoop brings massively parallel computing to commodity servers. The result is a sizeable decrease in the cost per terabyte of storage, which in turn makes it affordable to model all your data.
3. Flexible: Hadoop is schema-less and can absorb any type of data, structured or not, from any number of sources. Data from multiple sources can be joined and aggregated in arbitrary ways, enabling deeper analyses than any one system can provide.
4. Fault tolerant: When you lose a node, the system redirects work to another location of the data and continues processing without missing a beat.

Extract RAR files in the terminal, and other usage

Step 1: install the tools (unrar extracts archives; creating them requires the rar package)
sudo apt-get install unrar rar

Step 2: extract a rar file
unrar e filename.rar /path-to-extract-to/
(e extracts the files into the target directory, dropping the archive's directory structure)

To extract with full paths preserved:
unrar x filename.rar /path-to-extract-to/

List (l) the files inside a rar archive:
unrar l file.rar

Create a rar archive:
rar a file.rar file

Delete a file from a rar archive:
rar d filename.rar file_to_delete

Set a password for a rar file:
rar a -p tecmint.rar