Manage set of tracked repositories
Manage the set of repositories ("remotes") whose branches you track
SUBCOMMANDS
Subcommand | Description |
---|---|
ADD | Adds a remote named <name> for the repository at <url> |
RENAME | Rename the remote named <old> to <new> |
RM | Remove the remote named <name> |
SET-HEAD | Sets or deletes the default branch for the named remote |
SET-BRANCHES | Changes the list of branches tracked by the named remote |
GET-URL | Retrieves the URLs for a remote. Configurations for insteadOf and pushInsteadOf are expanded here |
SET-URL | Changes URLs for the remote. Sets first URL for remote <name> to <newurl> |
SHOW | Gives some information about the remote <name> |
PRUNE | Deletes stale references associated with <name> |
UPDATE | Fetch updates for remotes or remote groups in the repository as defined by remotes.<group> |
ADD
Adds a remote named <name> for the repository at <url>. The command git fetch <name> can then be used to create and update remote-tracking branches <name>/<branch>
git remote add [-t <branch>] [-m <master>] [-f] [--[no-]tags] [--mirror=<fetch|push>] <name> <url>
-f # git fetch <name> is run immediately after the remote information is set up
--tags # git fetch <name> imports every tag from the remote repository
--no-tags # git fetch <name> does not import tags from the remote repository. By default, only tags on fetched branches are imported
-t <branch> # instead of the default glob refspec for the remote to track all branches under the refs/remotes/<name>/ namespace, a refspec to track only <branch> is created. You can give more than one -t <branch> to track multiple branches without grabbing all branches
-m <master> # a symbolic-ref refs/remotes/<name>/HEAD is set up to point at remote’s <master> branch
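For example, one possible invocation (remote name, URL and branch names are placeholders):
# add a remote that tracks only two branches and fetch it right away
git remote add -f -t main -t develop upstream https://example.com/project.git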
RENAME
Rename the remote named <old> to <new>. All remote-tracking branches and configuration settings for the remote are updated
In case <old> and <new> are the same, and <old> is a file under $GIT_DIR/remotes or $GIT_DIR/branches, the remote is converted to the configuration file format
git remote rename <old> <new>
RM
Remove the remote named <name>. All remote-tracking branches and configuration settings for the remote are removed
git remote remove <name>
SET-HEAD
Sets or deletes the default branch (i.e. the target of the symbolic-ref refs/remotes/<name>/HEAD) for the named remote. Having a default branch for a remote is not required, but allows the name of the remote to be specified in lieu of a specific branch. For example, if the default branch for origin is set to master, then origin may be specified wherever you would normally specify origin/master
git remote set-head <name> (-a | --auto | -d | --delete | <branch>)
-d, --delete # the symbolic ref refs/remotes/<name>/HEAD is deleted
-a, --auto # the remote is queried to determine its HEAD, then the symbolic-ref refs/remotes/<name>/HEAD is set to the same branch. e.g., if the remote HEAD is pointed at next, "git remote set-head origin -a" will set the
symbolic-ref refs/remotes/origin/HEAD to refs/remotes/origin/next. This will only work if refs/remotes/origin/next already exists; if not it must be fetched first
Use <branch> to set the symbolic-ref refs/remotes/<name>/HEAD explicitly. e.g., "git remote set-head origin master" will set the symbolic-ref refs/remotes/origin/HEAD to refs/remotes/origin/master. This will only work if
refs/remotes/origin/master already exists; if not it must be fetched first
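For example (remote name is a placeholder):
# let the remote decide the default branch, or drop the local default
git remote set-head origin -a
git remote set-head origin -d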
SET-BRANCHES
Changes the list of branches tracked by the named remote. This can be used to track a subset of the available remote branches after the initial setup for a remote
The named branches will be interpreted as if specified with the -t option on the git remote add command line
git remote set-branches [--add] <name> <branch>...
--add # instead of replacing the list of currently tracked branches, adds to that list
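For example (remote and branch names are placeholders):
# track only maint, then later also add master to the tracked list
git remote set-branches origin maint
git remote set-branches --add origin master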
GET-URL
Retrieves the URLs for a remote. Configurations for insteadOf and pushInsteadOf are expanded here. By default, only the first URL is listed
git remote get-url [--push] [--all] <name>
--push # push URLs are queried rather than fetch URLs
--all # all URLs for the remote will be listed
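For example (remote name is a placeholder):
git remote get-url origin          # first fetch URL
git remote get-url --push origin   # first push URL
git remote get-url --all origin    # every configured URL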
SET-URL
Changes URLs for the remote. Sets first URL for remote <name> that matches regex <oldurl> (first URL if no <oldurl> is given) to <newurl>. If <oldurl> doesn’t match any URL, an error occurs and nothing is changed
Note that the push URL and the fetch URL, even though they can be set differently, must still refer to the same place. What you pushed to the push URL should be what you would see if you immediately fetched from the fetch URL. If
you are trying to fetch from one place (e.g. your upstream) and push to another (e.g. your publishing repository), use two separate remotes
git remote set-url [--push] <name> <newurl> [<oldurl>]
git remote set-url --add [--push] <name> <newurl>
git remote set-url --delete [--push] <name> <url>
--push # push URLs are manipulated instead of fetch URLs
--add # instead of changing existing URLs, new URL is added
--delete # instead of changing existing URLs, all URLs matching regex <url> are deleted for remote <name>. Trying to delete all non-push URLs is an error
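For example (URLs are placeholders):
# switch the fetch URL to SSH, then add a second push URL
git remote set-url origin git@example.com:user/project.git
git remote set-url --add --push origin git@mirror.example.com:user/project.git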
SHOW
Gives some information about the remote <name>
git remote [-v | --verbose] show [-n] <name>...
-n # the remote heads are not queried first with git ls-remote <name>; cached information is used instead
PRUNE
Deletes stale references associated with <name>. By default, stale remote-tracking branches under <name> are deleted, but depending on global configuration and the configuration of the remote we might even prune local tags that
haven’t been pushed there. Equivalent to git fetch --prune <name>, except that no new references will be fetched
git remote prune [-n | --dry-run] <name>...
--dry-run # report what branches will be pruned, but do not actually prune them
UPDATE
Fetch updates for remotes or remote groups in the repository as defined by remotes.<group>. If neither group nor remote is specified on the command line, the configuration parameter remotes.default will be used; if remotes.default is not defined, all remotes which do not have the configuration parameter remote.<name>.skipDefaultUpdate set to true will be updated
git remote [-v | --verbose] update [-p | --prune] [(<group> | <remote>)...]
--prune # run pruning against all the remotes that are updated
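For example, a remote group can be defined in the config and fetched in one call (group and remote names are placeholders):
git config remotes.mygroup "origin upstream"
git remote update --prune mygroup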
EXAMPLES
Add a new remote, fetch, and check out a branch from it
$ git remote
origin
$ git branch -r
origin/HEAD -> origin/master
origin/master
$ git remote add staging git://git.kernel.org/.../gregkh/staging.git
$ git remote
origin
staging
$ git fetch staging
...
From git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging
* [new branch] master -> staging/master
* [new branch] staging-linus -> staging/staging-linus
* [new branch] staging-next -> staging/staging-next
$ git branch -r
origin/HEAD -> origin/master
origin/master
staging/master
staging/staging-linus
staging/staging-next
$ git switch -c staging staging/master
...
Imitate git clone but track only selected branches
$ mkdir project.git
$ cd project.git
$ git init
$ git remote add -f -t master -m master origin git://example.com/git.git/
$ git merge origin
Shows the commit logs
The command takes options applicable to the git rev-list command to control what is shown and how, and options applicable to the git diff-* commands to control how the changes each commit introduces are shown
OPTIONS
git log [<options>] [<revision range>] [[--] <path>...]
--follow # Continue listing the history of a file beyond renames (works only for a single file)
--no-decorate, --decorate[=short|full|auto|no] # Print out the ref names of any commits that are shown
--decorate-refs=<pattern>, --decorate-refs-exclude=<pattern> # Only use a ref for decoration if it matches a --decorate-refs pattern and no --decorate-refs-exclude pattern. If no --decorate-refs is given, pretend as if all refs were included
--source # Print out the ref name given on the command line by which each commit was reached
--[no-]use-mailmap # Use mailmap file to map author and committer names and email addresses to canonical real names and email addresses
--full-diff # Without this flag, git log -p <path>... shows commits that touch the specified paths and diffs about the same specified paths; with this flag, the full diff is shown for commits that touch the specified paths
Note that this affects all diff-based output types, e.g. those produced by --stat, etc.
--log-size # Include a line “log size <number>” in the output for each commit, where <number> is the length of that commit’s message in bytes
-L <start>,<end>:<file>, -L :<funcname>:<file> # Trace the evolution of the line range given by "<start>,<end>" (or the function name regex <funcname>) within the <file>
<start> and <end> can take one of these forms:
- number # If <start> or <end> is a number, it specifies an absolute line number (lines count from 1)
- /regex/ # This form will use the first line matching the given POSIX regex
- +offset or -offset # This is only valid for <end> and will specify a number of lines before or after the line given by <start>
<revision range> # Show only commits in the specified revision range
[--] <path>... # Show only commits that are enough to explain how the files that match the specified paths came to be. See History Simplification below for details and other simplification modes
-<number>, -n <number>, --max-count=<number> # Limit the number of commits to output
--skip=<number> # Skip number commits before starting to show the commit output
--since=<date>, --after=<date> # Show commits more recent than a specific date
--until=<date>, --before=<date> # Show commits older than a specific date
--author=<pattern>, --committer=<pattern> # Limit the commits output to ones with author/committer header lines that match the specified pattern (regular expression)
--grep-reflog=<pattern> # Limit the commits output to ones with reflog entries that match the specified pattern (regular expression)
--grep=<pattern> # Limit the commits output to ones with log message that matches the specified pattern (regular expression)
--all-match # Limit the commits output to ones that match all given --grep, instead of ones that match at least one
--invert-grep # Limit the commits output to ones with log message that do not match the pattern specified with --grep=<pattern>
-i, --regexp-ignore-case # Match the regular expression limiting patterns without regard to letter case
--basic-regexp # Consider the limiting patterns to be basic regular expressions; this is the default
-E, --extended-regexp # Consider the limiting patterns to be extended regular expressions instead of the default basic regular expressions
-F, --fixed-strings # Consider the limiting patterns to be fixed strings (don’t interpret pattern as a regular expression)
-P, --perl-regexp # Consider the limiting patterns to be Perl-compatible regular expressions
--remove-empty # Stop when a given path disappears from the tree
--merges # Print only merge commits
--no-merges # Do not print commits with more than one parent
--min-parents=<number>, --max-parents=<number>, --no-min-parents, --no-max-parents # Show only commits which have at least (or at most) that many parent commits
--first-parent # Follow only the first parent commit upon seeing a merge commit
--not # Reverses the meaning of the ^ prefix (or lack thereof) for all following revision specifiers, up to the next --not
--all # Pretend as if all the refs in refs/, along with HEAD, are listed on the command line as <commit>
--branches[=<pattern>] # Pretend as if all the refs in refs/heads are listed on the command line as <commit>
--tags[=<pattern>] # Pretend as if all the refs in refs/tags are listed on the command line as <commit>
--remotes[=<pattern>] # Pretend as if all the refs in refs/remotes are listed on the command line as <commit>
--glob=<glob-pattern> # Pretend as if all the refs matching shell glob <glob-pattern> are listed on the command line as <commit>
--exclude=<glob-pattern> # Do not include refs matching <glob-pattern> that the next --all, --branches, --tags, --remotes, or --glob would otherwise consider
--reflog # Pretend as if all objects mentioned by reflogs are listed on the command line as <commit>
--alternate-refs # Pretend as if all objects mentioned as ref tips of alternate repositories were listed on the command line
--single-worktree # By default, all working trees will be examined by the following options when there are more than one (see git-worktree(1)): --all, --reflog and --indexed-objects
--ignore-missing # Upon seeing an invalid object name in the input, pretend as if the bad input was not given
--bisect # Pretend as if the bad bisection ref refs/bisect/bad was listed and as if it was followed by --not and the good bisection refs refs/bisect/good-* on the command line
--stdin # In addition to the <commit> listed on the command line, read them from the standard input
--cherry-mark # Like --cherry-pick (see below) but mark equivalent commits with = rather than omitting them, and inequivalent ones with +
--cherry-pick # Omit any commit that introduces the same change as another commit on the “other side” when the set of commits are limited with symmetric difference
--left-only, --right-only # List only commits on the respective side of a symmetric difference, i.e. only those which would be marked < resp. > by --left-right
--cherry # A synonym for --right-only --cherry-mark --no-merges; useful to limit the output to the commits on our side and mark those that have been applied to the other side of a forked history with git log --cherry upstream
-g, --walk-reflogs # Instead of walking the commit ancestry chain, walk reflog entries from the most recent one to older ones
--merge # After a failed merge, show refs that touch files having a conflict and don’t exist on all heads to merge
--boundary # Output excluded boundary commits
<paths> # Commits modifying the given <paths> are selected
--simplify-by-decoration # Commits that are referred by some branch or tag are selected
--full-history # Same as the default mode, but does not prune some history
--dense # Only the selected commits are shown, plus some to have a meaningful history
--sparse # All commits in the simplified history are shown
--simplify-merges # Additional option to --full-history to remove some needless merges from the resulting history, as there are no selected commits contributing to this merge
--ancestry-path # When given a range of commits to display (e.g. commit1..commit2 or commit2 ^commit1), only display commits that exist directly on the ancestry chain between commit1 and commit2
--full-history without parent rewriting # This mode differs from the default in one point: always follow all parents of a merge, even if it is TREESAME to one of them
--full-history with parent rewriting # Ordinary commits are only included if they are !TREESAME (though this can be changed, see --sparse below)
--dense # Commits that are walked are included if they are not TREESAME to any parent
--sparse # All commits that are walked are included
--simplify-merges # First, build a history graph in the same way that --full-history with parent rewriting does (see above)
--ancestry-path # Limit the displayed commits to those directly on the ancestry chain between the “from” and “to” commits in the given commit range
--date-order # Show no parents before all of its children are shown, but otherwise show commits in the commit timestamp order
--author-date-order # Show no parents before all of its children are shown, but otherwise show commits in the author timestamp order
--topo-order # Show no parents before all of its children are shown, and avoid showing commits on multiple lines of history intermixed
--reverse # Output the commits chosen to be shown (see Commit Limiting section above) in reverse order
--no-walk[=(sorted|unsorted)] # Only show the given commits, but do not traverse their ancestors
--do-walk # Overrides a previous --no-walk
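For instance, several of the limiting options above combined (author, pattern and date are placeholders):
git log --since="2 weeks ago" --author=alice --grep='fix' -i --no-merges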
Commit Formatting
--pretty[=<format>], --format=<format> # Pretty-print the contents of the commit logs in a given format, where <format> can be one of oneline, short, medium, full, fuller, reference, email, raw, format:<string> and tformat:<string>
--abbrev-commit # Instead of showing the full 40-byte hexadecimal commit object name, show only a partial prefix
--no-abbrev-commit # Show the full 40-byte hexadecimal commit object name
--oneline # This is a shorthand for "--pretty=oneline --abbrev-commit" used together
--encoding=<encoding> # The commit objects record the encoding used for the log message in their encoding header; this option can be used to tell the command to re-code the commit log message in the encoding preferred by the user
--expand-tabs=<n>, --expand-tabs, --no-expand-tabs # Perform a tab expansion (replace each tab with enough spaces to fill to the next display column that is multiple of <n>) in the log message before showing it in the output
--notes[=<ref>] # Show the notes (see git-notes(1)) that annotate the commit, when showing the commit log message
--no-notes # Do not show notes
--show-notes[=<ref>], --[no-]standard-notes # These options are deprecated
--show-signature # Check the validity of a signed commit object by passing the signature to gpg --verify and show the output
--relative-date # Synonym for --date=relative
--date=<format> # Only takes effect for dates shown in human-readable format, such as when using --pretty
--parents # Print also the parents of the commit (in the form "commit parent...")
--children # Print also the children of the commit (in the form "commit child...")
--left-right # Mark which side of a symmetric difference a commit is reachable from
--graph # Draw a text-based graphical representation of the commit history on the left hand side of the output
--show-linear-break[=<barrier>] # When --graph is not used, all history branches are flattened which can make it hard to see that the two consecutive commits do not belong to a linear branch
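For example, a decorated history graph of all refs, one line per commit:
git log --graph --oneline --decorate --all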
Diff Formatting
Listed below are options that control the formatting of diff output
-c # With this option, diff output for a merge commit shows the differences from each of the parents to the merge result simultaneously instead of showing pairwise diff between a parent and the result one at a time
--cc # This flag implies the -c option and further compresses the patch output by omitting uninteresting hunks whose contents in the parents have only two variants and the merge result picks one of them without modification
--combined-all-paths # This flag causes combined diffs (used for merge commits) to list the name of the file from all parents
-m # This flag makes the merge commits show the full diff like regular commits; for each merge parent, a separate log entry and diff is generated
-r # Show recursive diffs
-t # Show the tree objects in the diff output
PRETTY FORMATS
If the commit is a merge, and if the pretty-format is not oneline, email or raw, an additional line is inserted before the Author: line. This line begins with "Merge: " and the hashes of ancestral commits are printed, separated by spaces. Note that the listed commits may not necessarily be the list of the direct parent commits if you have limited your view of history: for example, if you are only interested in changes related to a certain directory or file
There are several built-in formats, and you can define additional formats by setting a pretty.<name> config option to either another format name, or a format: string, as described below (see git-config(1))
Here are the details of the built-in formats:
- oneline
- short
- medium
- full
- fuller
- reference
- raw
- format:<string> # see man for details
- tformat:<string> # see man for details
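A custom format can also be registered under pretty.<name> and used by name (the alias "brief" is a placeholder):
git config pretty.brief "format:%h %ad | %s [%an]"
git log --pretty=brief --date=short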
COMMON DIFF OPTIONS
-p, -u, --patch # Generate patch (see section on generating patches)
-s, --no-patch # Suppress diff output
-U<n>, --unified=<n> # Generate diffs with <n> lines of context instead of the usual three
--output=<file> # Output to a specific file instead of stdout
--output-indicator-new=<char>, --output-indicator-old=<char>, --output-indicator-context=<char> # Specify the character used to indicate new, old or context lines in the generated patch
--raw # For each commit, show a summary of changes using the raw diff format
--patch-with-raw # Synonym for -p --raw
--indent-heuristic # Enable the heuristic that shifts diff hunk boundaries to make patches easier to read
--no-indent-heuristic # Disable the indent heuristic
--minimal # Spend extra time to make sure the smallest possible diff is produced
--patience # Generate a diff using the "patience diff" algorithm
--histogram # Generate a diff using the "histogram diff" algorithm
--anchored=<text> # Generate a diff using the "anchored diff" algorithm
--diff-algorithm={patience|minimal|histogram|myers} # Choose a diff algorithm
The variants are as follows:
default, myers # The basic greedy diff algorithm
minimal # Spend extra time to make sure the smallest possible diff is produced
patience # Use "patience diff" algorithm when generating patches
histogram # This algorithm extends the patience algorithm to "support low-occurrence common elements"
--stat[=<width>[,<name-width>[,<count>]]] # Generate a diffstat
--compact-summary # Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat
--numstat # Similar to --stat, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly
--shortstat # Output only the last line of the --stat format containing total number of modified files, as well as number of added and deleted lines
-X[<param1,param2,...>], --dirstat[=<param1,param2,...>] # Output the distribution of relative amount of changes for each sub-directory. The behavior of --dirstat can be customized by passing it a comma separated list of parameters
The following parameters are available:
changes # Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination
lines # Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts
files # Compute the dirstat numbers by counting the number of files changed
cumulative # Count changes in a child directory for the parent directory as well
<limit> # An integer parameter specifies a cut-off percent (3% by default)
Example: The following will count changed files, while ignoring directories with less than 10% of the total amount of changed files, and accumulating child directory counts in the parent directories: --dirstat=files,10,cumulative
--cumulative # Synonym for --dirstat=cumulative
--summary # Output a condensed summary of extended header information such as creations, renames and mode changes
--patch-with-stat # Synonym for -p --stat
-z # Separate the commits with NULs instead of with newlines
--name-only # Show only names of changed files
--name-status # Show only names and status of changed files
--submodule[=<format>] # Specify how differences in submodules are shown
--color[=<when>] # Show colored diff
--no-color # Turn off colored diff
--color-moved[=<mode>] # Moved lines of code are colored differently
The mode must be one of:
no # Moved lines are not highlighted
default # Is a synonym for zebra
plain # Any line that is added in one location and was removed in another location will be colored with color.diff.newMoved
blocks # Blocks of moved text of at least 20 alphanumeric characters are detected greedily
zebra # Blocks of moved text are detected as in blocks mode
dimmed-zebra # Similar to zebra, but additional dimming of uninteresting parts of moved code is performed
--no-color-moved # Turn off move detection
--color-moved-ws=<modes> # This configures how whitespace is ignored when performing the move detection for --color-moved
These modes can be given as a comma separated list:
no # Do not ignore whitespace when performing move detection
ignore-space-at-eol # Ignore changes in whitespace at EOL
ignore-space-change # Ignore changes in amount of whitespace
ignore-all-space # Ignore whitespace when comparing lines
allow-indentation-change # Initially ignore any whitespace in the move detection, then group the moved code blocks only into a block if the change in whitespace is the same per line
--no-color-moved-ws # Do not ignore whitespace when performing move detection
--word-diff[=<mode>] # Show a word diff, using the <mode> to delimit changed words
The <mode> defaults to plain, and must be one of:
color # Highlight changed words using only colors
plain # Show words as [-removed-] and {+added+}
porcelain # Use a special line-based format intended for script consumption
none # Disable word diff again
--word-diff-regex=<regex> # Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word
--color-words[=<regex>] # Equivalent to --word-diff=color plus (if a regex was specified) --word-diff-regex=<regex>
--no-renames # Turn off rename detection, even when the configuration file gives the default to do so
--[no-]rename-empty # Whether to use empty blobs as rename source
--check # Warn if changes introduce conflict markers or whitespace errors
--ws-error-highlight=<kind> # Highlight whitespace errors in the context, old or new lines of the diff
--full-index # Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output
--binary # In addition to --full-index, output a binary diff that can be applied with git-apply
--abbrev[=<n>] # Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show only a partial prefix
-B[<n>][/<m>], --break-rewrites[=[<n>][/<m>]] # Break complete rewrite changes into pairs of delete and create
-M[<n>], --find-renames[=<n>] # If generating diffs, detect and report renames for each commit
-C[<n>], --find-copies[=<n>] # Detect copies as well as renames
--find-copies-harder # For performance reasons, by default, -C option finds copies only if the original file of the copy was modified in the same changeset
-D, --irreversible-delete # Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and /dev/null
-l<num> # The -M and -C options require O(n^2) processing time where n is the number of potential rename/copy targets; this option prevents rename/copy detection from running if the number of targets exceeds the specified number
--diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]] # Select only files that are Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), have their type (i.e. regular file, symlink, submodule, ...) changed (T), are Unmerged (U), are Unknown (X), or have had their pairing Broken (B)
-S<string> # Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file. Intended for the scripter’s use
-G<regex> # Look for differences whose patch text contains added/removed lines that match <regex>
--find-object=<object-id> # Look for differences that change the number of occurrences of the specified object
--pickaxe-all # When -S or -G finds a change, show all the changes in that changeset, not just the files that contain the change in <string>
--pickaxe-regex # Treat the <string> given to -S as an extended POSIX regular expression to match
-O<orderfile> # Control the order in which files appear in the output
<orderfile> is parsed as follows:
- Blank lines are ignored, so they can be used as separators for readability
- Lines starting with a hash ("#") are ignored, so they can be used for comments. Add a backslash ("\\") to the beginning of the pattern if it starts with a hash
- Each other line contains a single pattern
-R # Swap two inputs; that is, show differences from index or on-disk file to tree contents
--relative[=<path>] # When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option
-a, --text # Treat all files as text
--ignore-cr-at-eol # Ignore carriage-return at the end of line when doing a comparison
--ignore-space-at-eol # Ignore changes in whitespace at EOL
-b, --ignore-space-change # Ignore changes in amount of whitespace
-w, --ignore-all-space # Ignore whitespace when comparing lines
--ignore-blank-lines # Ignore changes whose lines are all blank
--inter-hunk-context=<lines> # Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other
-W, --function-context # Show whole surrounding functions of changes
--ext-diff # Allow an external diff helper to be executed
--no-ext-diff # Disallow external diff drivers
--textconv, --no-textconv # Allow (or disallow) external text conversion filters to be run when comparing binary files
--ignore-submodules[=<when>] # Ignore changes to submodules in the diff generation
--src-prefix=<prefix> # Show the given source prefix instead of "a/"
--dst-prefix=<prefix> # Show the given destination prefix instead of "b/"
--no-prefix # Do not show any source or destination prefix
--line-prefix=<prefix> # Prepend an additional prefix to every line of output
--ita-invisible-in-index # By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached"
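Some of the diff options above applied to git log (the search string is a placeholder):
# patch plus diffstat for commits that change the number of occurrences of a string
git log -p --stat -S"parse_config"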
EXAMPLES
git log --no-merges
Show the whole commit history, but skip any merges
git log v2.6.12.. include/scsi drivers/scsi
Show all commits since version v2.6.12 that changed any file in the include/scsi or drivers/scsi subdirectories
git log --since="2 weeks ago" -- gitk
Show the changes during the last two weeks to the file gitk. The -- is necessary to avoid confusion with the branch named gitk
git log --name-status release..test
Show the commits that are in the "test" branch but not yet in the "release" branch, along with the list of paths each commit modifies
git log --follow builtin/rev-list.c
Shows the commits that changed builtin/rev-list.c, including those commits that occurred before the file was given its present name
git log --branches --not --remotes=origin
Shows all commits that are in any of local branches but not in any of remote-tracking branches for origin (what you have that origin doesn’t)
git log master --not --remotes=*/master
Shows all commits that are in local master but not in any remote repository master branches
git log -p -m --first-parent
Shows the history including change diffs, but only from the “main branch” perspective, skipping commits that come from merged branches, and showing full diffs of changes introduced by the merges. This makes sense only when following
a strict policy of merging all topic branches when staying on a single integration branch
git log -L '/int main/',/^}/:main.c
Shows how the function main() in the file main.c evolved over time
git log -3
Limits the number of commits to show to 3
TRICK
# show all logs
git log
-p -2 # show the diff introduced by each of the last two commits
-U1 --word-diff # show a word-level diff with one line of context
--stat # show statistics
--pretty=oneline
--pretty=short
--pretty=full
--pretty=fuller
--pretty=format:"%h - %an, %ar : %s"
https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging
Join two or more development histories together
Incorporates changes from the named commits (since the time their histories diverged from the current branch) into the current branch. This command is used by git pull to incorporate changes from another repository and can be used by hand to merge changes from one branch into another
git merge [-n] [--stat] [--no-commit] [--squash] [--[no-]edit] [--no-verify] [-s <strategy>] [-X <strategy-option>] [-S[<keyid>]] [--[no-]allow-unrelated-histories] [--[no-]rerere-autoupdate] [-m <msg>] [-F <file>] [<commit>...]
git merge (--continue | --abort | --quit)
--commit # Perform the merge and commit the result
--no-commit # perform the merge and stop just before creating a merge commit, to give the user a chance to inspect and further tweak the merge result before committing
--edit, -e # Invoke an editor before committing successful mechanical merge to further edit the auto-generated merge message, so that the user can explain and justify the merge
--no-edit # Accept the auto-generated message (this is generally discouraged)
--cleanup=<mode> # Determines how the merge message will be cleaned up before committing
--ff, --no-ff, --ff-only # Specifies how a merge is handled when the merged-in history is already a descendant of the current history
--ff # resolve the merge as a fast-forward when possible. This is the default unless merging an annotated (and possibly signed) tag that is not stored in its natural place in the refs/tags/ hierarchy, in which case --no-ff is assumed
--no-ff # create a merge commit in all cases, even when the merge could instead be resolved as a fast-forward
--ff-only # resolve the merge as a fast-forward when possible. When not possible, refuse to merge and exit with a non-zero status
-S[<keyid>], --gpg-sign[=<keyid>] # GPG-sign the resulting merge commit. The keyid argument is optional and defaults to the committer identity
--log[=<n>], --no-log # In addition to branch names, populate the log message with one-line descriptions from at most <n> actual commits that are being merged
--no-log # do not list one-line descriptions from the actual commits being merged
--signoff, --no-signoff # Add Signed-off-by line by the committer at the end of the commit log message
--stat # Show a diffstat at the end of the merge
-n, --no-stat # do not show a diffstat at the end of the merge
--squash # produce the working tree and index state as if a real merge happened, but do not actually make a commit or move HEAD
--no-squash # perform the merge and commit the result
--verify-signatures, --no-verify-signatures # verify that the tip commit of the side branch being merged is signed with a valid key
-q, --quiet # Operate quietly. Implies --no-progress
-v, --verbose # be verbose
--progress, --no-progress # Turn progress on/off explicitly
--allow-unrelated-histories # by default, git merge refuses to merge histories that do not share a common ancestor; this option overrides that safety
-m <msg> # set the commit message to be used for the merge commit (in case one is created)
--continue # after a git merge stops due to conflicts you can conclude the merge by running git merge --continue
-s <strategy>, --strategy=<strategy> # use the given merge strategy
- resolve # this can only resolve two heads (the current branch and another branch you pulled from) using a 3-way merge algorithm
- recursive # this can only resolve two heads using a 3-way merge algorithm; this is the default merge strategy when pulling or merging one branch
- options:
ours # forces conflicting hunks to be auto-resolved cleanly by favoring our version
theirs # opposite of ours; note that, unlike ours, there is no theirs merge strategy to confuse this merge option with
patience # merge-recursive spends a little extra time to avoid mismerges that sometimes occur due to unimportant matching lines
diff-algorithm=[patience|minimal|histogram|myers] # use a different diff algorithm, which can help avoid mismerges that occur due to unimportant matching lines
ignore-space-change, ignore-all-space, ignore-space-at-eol, ignore-cr-at-eol # treat lines with the indicated type of whitespace change as unchanged for the sake of a three-way merge
renormalize # runs a virtual check-out and check-in of all three stages of a file when resolving a three-way merge
no-renormalize # disables the renormalize option
no-renames # turn off rename detection
find-renames[=<n>] # turn on rename detection, optionally setting the similarity threshold
subtree[=<path>] # is a more advanced form of subtree strategy, where the strategy makes a guess on how two trees must be shifted to match with each other when merging
- octopus # resolves cases with more than two heads, but refuses to do a complex merge that needs manual resolution
- ours # resolves any number of heads, but the resulting tree of the merge is always that of the current branch head, effectively ignoring all changes from all other branches
- subtree # this is a modified recursive strategy. When merging trees A and B, if B corresponds to a subtree of A, B is first adjusted to match the tree structure of A, instead of reading the trees at the same level
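For example, a merge that keeps an explicit merge commit and favors the other side's hunks on conflicts (branch name is a placeholder):
git merge --no-ff -X theirs topic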
Switch branches or restore working tree files
# To prepare for working on <branch>, switch to it by updating the index and the files in the working tree, and by pointing HEAD at the branch
git checkout [<branch>]
# Specifying -b causes a new branch to be created as if git-branch(1) were called and then checked out
git checkout -b|-B <new_branch> [<start point>]
# Prepare to work on top of <commit>, by detaching HEAD at it (see "DETACHED HEAD" section), and updating the index and the files in the working tree
git checkout --detach [<branch>], git checkout [--detach] <commit>
# Overwrite the contents of the files that match the pathspec. When the <tree-ish> (most often a commit) is not given, overwrite working tree with the contents in the index
git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] [--] <pathspec>..., git checkout [-f|--ours|--theirs|-m|--conflict=<style>] [<tree-ish>] --pathspec-from-file=<file> [--pathspec-file-nul]
# similar to the previous mode, but lets you use the interactive interface to show the "diff" output and choose which hunks to use in the result
git checkout (-p|--patch) [<tree-ish>] [--] [<pathspec>...]
-q, --quiet # Quiet, suppress feedback messages
--progress, --no-progress # Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified
-f, --force # When switching branches, proceed even if the index or the working tree differs from HEAD. This is used to throw away local changes
--ours, --theirs # When checking out paths from the index, check out stage #2 (ours) or #3 (theirs) for unmerged paths
-b <new_branch> # Create a new branch named <new_branch> and start it at <start_point>
-B <new_branch> # Create the branch <new_branch> and start it at <start_point>; if it already exists, reset it to <start_point>. This is equivalent to running "git branch" with "-f"
-t, --track # When creating a new branch, set up "upstream" configuration. See "--track" in git-branch(1) for details. If no -b option is given, the name of the new branch will be derived from the remote-tracking branch, by looking at the local part of the refspec configured for the corresponding remote, and then stripping the initial part up to the "*"
--no-track # Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is true
--guess, --no-guess # If <branch> is not found but there does exist a tracking branch in exactly one remote (call it <remote>) with a matching name, treat as equivalent to $ git checkout -b <branch> --track <remote>/<branch>
-l # Create the new branch’s reflog
--detach # Rather than checking out a branch to work on it, check out a commit for inspection and discardable experiments
--orphan <new_branch> # Create a new orphan branch, named <new_branch>, started from <start_point> and switch to it
--ignore-skip-worktree-bits # In sparse checkout mode, git checkout -- <paths> would update only entries matched by <paths> and sparse patterns in $GIT_DIR/info/sparse-checkout; this option ignores the sparse patterns and adds back any files in <paths>
-m, --merge # a three-way merge between the current branch, your working tree contents, and the new branch is done, and you will be on the new branch
--conflict=<style> # The same as --merge option above, but changes the way the conflicting hunks are presented, overriding the merge.conflictStyle configuration variable
-p, --patch # Interactively select hunks in the difference between the <tree-ish> (or the index, if unspecified) and the working tree
--ignore-other-worktrees # git checkout refuses when the wanted ref is already checked out by another worktree. This option makes it check the ref out anyway. In other words, the ref can be held by more than one worktree
--overwrite-ignore, --no-overwrite-ignore # Silently overwrite ignored files when switching branches. This is the default behavior. Use --no-overwrite-ignore to abort the operation when the new branch contains ignored files
--recurse-submodules, --no-recurse-submodules # Using --recurse-submodules will update the content of all initialized submodules according to the commit recorded in the superproject
--overlay, --no-overlay # In the default overlay mode, git checkout never removes files from the index or the working tree
--pathspec-from-file=<file> # Pathspec is passed in <file> instead of commandline args
--pathspec-file-nul # Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes)
<branch> # Branch to checkout; if it refers to a branch (i.e., a name that, when prepended with "refs/heads/", is a valid ref), then that branch is checked out. Otherwise, if it refers to a valid commit, your HEAD becomes "detached" and you are no longer on any branch
<new_branch> # Name for the new branch
<start_point> # The name of a commit at which to start the new branch. Defaults to HEAD
<tree-ish> # Tree to checkout from (when paths are given). If not specified, the index will be used
-- # Do not interpret any more arguments as options
<pathspec>... # Limits the paths affected by the operation
USED
checkout a remote branch
In order to checkout a remote branch you have to first fetch the contents of the branch
git fetch --all
# or
# git fetch <repo> <branch>
Checkout the remote branch
# In modern versions of Git, you can then checkout the remote branch like a local branch
git checkout <remotebranch>
# Older versions of Git require the creation of a new branch based on the remote
git checkout <remotebranch> <repo>/<remotebranch>
Checkout the local branch
# Additionally you can checkout a new local branch and reset it to the remote branches last commit
git checkout -b <branch> && git reset --hard origin/<branch>
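Restore a single file from another commit (commit and path are placeholders)
# overwrite the working-tree copy of the file with the version from HEAD~1
git checkout HEAD~1 -- path/to/file.txt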
List, create, or delete branches
If --list is given, or if there are no non-option arguments, existing branches are listed; the current branch will be highlighted in green and marked with an asterisk. Any branches checked out in linked worktrees will be highlighted in cyan and marked with a plus sign. Option -r causes the remote-tracking branches to be listed, and option -a shows both local and remote branches
If a <pattern> is given, it is used as a shell wildcard to restrict the output to matching branches. If multiple patterns are given, a branch is shown if it matches any of the patterns
git branch [--color[=<when>] | --no-color] [--show-current] [-v [--abbrev=<length> | --no-abbrev]] [--column[=<options>] | --no-column] [--sort=<key>] [(--merged | --no-merged) [<commit>]] [--contains [<commit>]] [--no-contains [<commit>]] [--points-at <object>] [--format=<format>] [(-r | --remotes) | (-a | --all)] [--list] [<pattern>...]
git branch [--track | --no-track] [-f] <branchname> [<start-point>]
git branch (--set-upstream-to=<upstream> | -u <upstream>) [<branchname>]
git branch --unset-upstream [<branchname>]
git branch (-m | -M) [<oldbranch>] <newbranch>
git branch (-c | -C) [<oldbranch>] <newbranch>
git branch (-d | -D) [-r] <branchname>...
git branch --edit-description [<branchname>]
-d, --delete # Delete a branch. The branch must be fully merged in its upstream branch, or in HEAD if no upstream was set with --track or --set-upstream-to
-D # Shortcut for --delete --force
--create-reflog # Create the branch’s reflog. This activates recording of all changes made to the branch ref, enabling use of date based sha1 expressions such as "<branchname>@{yesterday}"
-f, --force # Reset <branchname> to <startpoint>, even if <branchname> exists already. Without -f, git branch refuses to change an existing branch. In combination with -d (or --delete), allow deleting the branch irrespective of its merged status. In combination with -m (or --move), allow renaming the branch even if the new branch name already exists, the same applies for -c (or --copy)
-m, --move # Move/rename a branch and the corresponding reflog
-M # Shortcut for --move --force
-c, --copy # Copy a branch and the corresponding reflog
-C # Shortcut for --copy --force
--color[=<when>] # Color branches to highlight current, local, and remote-tracking branches. The value must be always (the default), never, or auto
--no-color # Turn off branch colors, even when the configuration file gives the default to color output. Same as --color=never
-i, --ignore-case # Sorting and filtering branches are case insensitive
--column[=<options>], --no-column # Display branch listing in columns. See configuration variable column.branch for option syntax
-r, --remotes # List or delete (if used with -d) the remote-tracking branches. Combine with --list to match the optional pattern(s)
-a, --all # List both remote-tracking branches and local branches. Combine with --list to match optional pattern(s)
-l, --list # List branches. With optional <pattern>..., e.g. git branch --list 'maint-*', list only the branches that match the pattern(s)
--show-current # Print the name of the current branch. In detached HEAD state, nothing is printed
-v, -vv, --verbose # When in list mode, show sha1 and commit subject line for each head, along with relationship to upstream branch (if any)
-q, --quiet # Be more quiet when creating or deleting a branch, suppressing non-error messages
--abbrev=<length> # Alter the sha1’s minimum display length in the output listing. The default value is 7 and can be overridden by the core.abbrev config option
--no-abbrev # Display the full sha1s in the output listing rather than abbreviating them
-t, --track # When creating a new branch, set up branch.<name>.remote and branch.<name>.merge configuration entries to mark the start-point branch as "upstream" from the new branch
--no-track # Do not set up "upstream" configuration, even if the branch.autoSetupMerge configuration variable is true
-u <upstream>, --set-upstream-to=<upstream> # Set up <branchname>'s tracking information so <upstream> is considered <branchname>'s upstream branch. If no <branchname> is specified, then it defaults to the current branch
--unset-upstream # Remove the upstream information for <branchname>. If no branch is specified it defaults to the current branch
--edit-description # Open an editor and edit the text to explain what the branch is for, to be used by various other commands (e.g. format-patch, request-pull, and merge (if enabled)). Multi-line explanations may be used
--contains [<commit>] # Only list branches which contain the specified commit (HEAD if not specified). Implies --list
--no-contains [<commit>] # Only list branches which don’t contain the specified commit (HEAD if not specified). Implies --list
--merged [<commit>] # Only list branches whose tips are reachable from the specified commit (HEAD if not specified). Implies --list, incompatible with --no-merged
--no-merged [<commit>] # Only list branches whose tips are not reachable from the specified commit (HEAD if not specified). Implies --list, incompatible with --merged
<branchname> # The name of the branch to create or delete. The new branch name must pass all checks defined by git-check-ref-format(1). Some of these checks may restrict the characters allowed in a branch name
<start-point> # The new branch head will point to this commit. It may be given as a branch name, a commit-id, or a tag. If this option is omitted, the current HEAD will be used instead
<oldbranch> # The name of an existing branch to rename
<newbranch> # The new name for an existing branch. The same restrictions as for <branchname> apply
--sort=<key> # Sort based on the key given. Prefix - to sort in descending order of the value. You may use the --sort=<key> option multiple times, in which case the last key becomes the primary key
--points-at <object> # Only list branches of the given object
--format <format> # A string that interpolates %(fieldname) from a branch ref being shown and the object it points at. The format is the same as that of git-for-each-ref(1)
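For example, a possible listing of local branches by most recent commit together with their upstreams (the format fields follow git-for-each-ref):
git branch --sort=-committerdate --format='%(refname:short) -> %(upstream:short)'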
Show changes between commits, commit and working tree, etc
Show changes between the working tree and the index or a tree, changes between the index and a tree, changes between two trees, changes between two blob objects, or changes between two files on disk
# view the changes you made relative to the index (staging area for the next commit)
git diff [<options>] [--] [<path>...]
# compare the given two paths on the filesystem
git diff [<options>] --no-index [--] <path> <path>
# view the changes you staged for the next commit relative to the named <commit>
git diff [<options>] --cached [<commit>] [--] [<path>...]
# view the changes you have in your working tree relative to the named <commit>
git diff [<options>] <commit> [--] [<path>...]
# view the changes between two arbitrary <commit>
git diff [<options>] <commit> <commit> [--] [<path>...]
git diff [<options>] <commit>..<commit> [--] [<path>...]
# view the changes on the branch containing and up to the second <commit>, starting at a common ancestor of both <commit>
git diff [<options>] <commit>...<commit> [--] [<path>...]
-p, -u, --patch # Generate patch (see section on generating patches). This is the default
-s, --no-patch # Suppress diff output. Useful for commands like git show that show the patch by default, or to cancel the effect of --patch
-U<n>, --unified=<n> # Generate diffs with <n> lines of context instead of the usual three. Implies --patch
--output=<file> # Output to a specific file instead of stdout
--output-indicator-new=<char>, --output-indicator-old=<char>, --output-indicator-context=<char> # Specify the character used to indicate new, old or context lines in the generated patch. Normally they are +, - and ' ' respectively
--raw # Generate the diff in raw format
--patch-with-raw # Synonym for -p --raw
--indent-heuristic # Enable the heuristic that shifts diff hunk boundaries to make patches easier to read. This is the default
--no-indent-heuristic # Disable the indent heuristic
--minimal # Spend extra time to make sure the smallest possible diff is produced
--patience # Generate a diff using the "patience diff" algorithm
--histogram # Generate a diff using the "histogram diff" algorithm
--anchored=<text> # Generate a diff using the "anchored diff" algorithm
--diff-algorithm={patience|minimal|histogram|myers} # Choose a diff algorithm. The variants are as follows:
default, myers # The basic greedy diff algorithm. Currently, this is the default
minimal # Spend extra time to make sure the smallest possible diff is produced
patience # Use "patience diff" algorithm when generating patches
histogram # This algorithm extends the patience algorithm to "support low-occurrence common elements"
--stat[=<width>[,<name-width>[,<count>]]] # Generate a diffstat
--compact-summary # Output a condensed summary of extended header information such as file creations or deletions ("new" or "gone", optionally "+l" if it’s a symlink) and mode changes ("+x" or "-x" for adding or removing executable bit respectively) in diffstat
--numstat # Similar to --stat, but shows number of added and deleted lines in decimal notation and pathname without abbreviation, to make it more machine friendly
--shortstat # Output only the last line of the --stat format containing total number of modified files, as well as number of added and deleted lines
-X[<param1,param2,...>], --dirstat[=<param1,param2,...>] # Output the distribution of relative amount of changes for each sub-directory. The behavior of --dirstat can be customized by passing it a comma separated list of parameters. The following parameters are available:
changes # Compute the dirstat numbers by counting the lines that have been removed from the source, or added to the destination
lines # Compute the dirstat numbers by doing the regular line-based diff analysis, and summing the removed/added line counts
files # Compute the dirstat numbers by counting the number of files changed. Each changed file counts equally in the dirstat analysis
cumulative # Count changes in a child directory for the parent directory as well
<limit> # An integer parameter specifies a cut-off percent (3% by default)
--dirstat-by-file[=<param1,param2>...] # Synonym for --dirstat=files,param1,param2...
--summary # Output a condensed summary of extended header information such as creations, renames and mode changes
--patch-with-stat # Synonym for -p --stat
-z # When --raw, --numstat, --name-only or --name-status has been given, do not munge pathnames and use NULs as output field terminators
--name-only # Show only names of changed files
--name-status # Show only names and status of changed files. See the description of the --diff-filter option on what the status letters mean
--submodule[=<format>] # Specify how differences in submodules are shown. When specifying --submodule=short the short format is used
--color[=<when>] # Show colored diff. --color (i.e. without =<when>) is the same as --color=always
--no-color # Turn off colored diff. This can be used to override configuration settings. It is the same as --color=never
--color-moved[=<mode>] # Moved lines of code are colored differently. <mode> defaults to no. The mode must be one of:
no # Moved lines are not highlighted
default # Is a synonym for zebra. This may change to a more sensible mode in the future
plain # Any line that is added in one location and was removed in another location will be colored with color.diff.newMoved
blocks # Blocks of moved text of at least 20 alphanumeric characters are detected greedily
zebra # Blocks of moved text are detected as in blocks mode
dimmed-zebra # Similar to zebra, but additional dimming of uninteresting parts of moved code is performed
--no-color-moved-ws # Do not ignore whitespace when performing move detection. This can be used to override configuration settings
--word-diff[=<mode>] # Show a word diff, using the <mode> to delimit changed words. The <mode> defaults to plain, and must be one of:
color # Highlight changed words using only colors. Implies --color
plain # Show words as [-removed-] and {+added+}
porcelain # Use a special line-based format intended for script consumption
none # Disable word diff again
--word-diff-regex=<regex> # Use <regex> to decide what a word is, instead of considering runs of non-whitespace to be a word
--color-words[=<regex>] # Equivalent to --word-diff=color plus (if a regex was specified) --word-diff-regex=<regex>
--no-renames # Turn off rename detection, even when the configuration file gives the default to do so
--[no-]rename-empty # Whether to use empty blobs as rename source
--check # Warn if changes introduce conflict markers or whitespace errors. What are considered whitespace errors is controlled by core
--ws-error-highlight=<kind> # Highlight whitespace errors in the context, old or new lines of the diff
--full-index # Instead of the first handful of characters, show the full pre- and post-image blob object names on the "index" line when generating patch format output
--binary # In addition to --full-index, output a binary diff that can be applied with git-apply. Implies --patch
--abbrev[=<n>] # Instead of showing the full 40-byte hexadecimal object name in diff-raw format output and diff-tree header lines, show only a partial prefix
-B[<n>][/<m>], --break-rewrites[=[<n>][/<m>]] # Break complete rewrite changes into pairs of delete and create
-M[<n>], --find-renames[=<n>] # Detect renames. If n is specified, it is a threshold on the similarity index (i.e. amount of addition/deletions compared to the file’s size)
-C[<n>], --find-copies[=<n>] # Detect copies as well as renames
--find-copies-harder # For performance reasons, by default, -C option finds copies only if the original file of the copy was modified in the same changeset
-D, --irreversible-delete # Omit the preimage for deletes, i.e. print only the header but not the diff between the preimage and /dev/null
-l<num> # The -M and -C options require O(n^2) processing time where n is the number of potential rename/copy targets; this option prevents rename/copy detection from running if the number of targets exceeds the specified number
--diff-filter=[(A|C|D|M|R|T|U|X|B)...[*]] # Select only files that are Added (A), Copied (C), Deleted (D), Modified (M), Renamed (R), have their type (i.e. regular file, symlink, submodule, ...) changed (T), are Unmerged (U), are Unknown (X), or have had their pairing Broken (B)
-S<string> # Look for differences that change the number of occurrences of the specified string (i.e. addition/deletion) in a file
-G<regex> # Look for differences whose patch text contains added/removed lines that match <regex>
--find-object=<object-id> # Look for differences that change the number of occurrences of the specified object
--pickaxe-all # When -S or -G finds a change, show all the changes in that changeset, not just the files that contain the change in <string>
--pickaxe-regex # Treat the <string> given to -S as an extended POSIX regular expression to match
-O<orderfile> # Control the order in which files appear in the output. <orderfile> is parsed as follows:
- Blank lines are ignored, so they can be used as separators for readability
- Lines starting with a hash ("#") are ignored, so they can be used for comments. Add a backslash ("\\") to the beginning of the pattern if it starts with a hash
- Each other line contains a single pattern
-R # Swap two inputs; that is, show differences from index or on-disk file to tree contents
--relative[=<path>] # When run from a subdirectory of the project, it can be told to exclude changes outside the directory and show pathnames relative to it with this option
-a, --text # Treat all files as text
--ignore-cr-at-eol # Ignore carriage-return at the end of line when doing a comparison
--ignore-space-at-eol # Ignore changes in whitespace at EOL
-b, --ignore-space-change # Ignore changes in amount of whitespace
-w, --ignore-all-space # Ignore whitespace when comparing lines
--ignore-blank-lines # Ignore changes whose lines are all blank
--inter-hunk-context=<lines> # Show the context between diff hunks, up to the specified number of lines, thereby fusing hunks that are close to each other
-W, --function-context # Show whole surrounding functions of changes
--exit-code # Make the program exit with codes similar to diff(1). That is, it exits with 1 if there were differences and 0 means no differences
--quiet # Disable all output of the program. Implies --exit-code
--ext-diff # Allow an external diff helper to be executed. If you set an external diff driver with gitattributes(5), you need to use this option with git-log(1) and friends
--no-ext-diff # Disallow external diff drivers
--textconv, --no-textconv # Allow (or disallow) external text conversion filters to be run when comparing binary files
--ignore-submodules[=<when>] # Ignore changes to submodules in the diff generation
--src-prefix=<prefix> # Show the given source prefix instead of "a/"
--dst-prefix=<prefix> # Show the given destination prefix instead of "b/"
--no-prefix # Do not show any source or destination prefix
--line-prefix=<prefix> # Prepend an additional prefix to every line of output
--ita-invisible-in-index # By default entries added by "git add -N" appear as an existing empty file in "git diff" and a new file in "git diff --cached". This option makes the entry appear as a new file in "git diff" and non-existent in "git diff --cached"
-1 --base, -2 --ours, -3 --theirs # Compare the working tree with the "base" version (stage #1), "our branch" (stage #2) or "their branch" (stage #3)
-0 # Omit diff output for unmerged entries and just show "Unmerged". Can be used only when comparing the working tree with the index
<path>... # The <paths> parameters, when given, are used to limit the diff to the named paths (you can give directory names and get diff for all files under them)
USED
# show diff staged / repo
git diff --staged
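# a few more common forms (branch and path names are placeholders)
git diff # show unstaged changes (working tree vs index)
git diff HEAD # show all changes since the last commit (working tree vs HEAD)
git diff master..topic -- path/to/file # show changes between two branches for a single path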
Move or rename a file, a directory, or a symlink
# renames <source>
git mv [-v] [-f] [-n] [-k] <source> <destination>
# the last argument has to be an existing directory; the given sources will be moved into this directory
git mv [-v] [-f] [-n] [-k] <source> ... <destination directory>
-f, --force # Force renaming or moving of a file even if the target exists
-k # Skip move or rename actions which would lead to an error condition. An error happens when a source is neither existing nor controlled by Git, or when it would overwrite an existing file unless -f is given.
-n, --dry-run # Do nothing; only show what would happen
-v, --verbose # Report the names of files as they are moved.
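Typical invocations (file and directory names are placeholders):
git mv old_name.txt new_name.txt # rename a tracked file and stage the rename
git mv file1.txt file2.txt docs/ # move several files into an existing directory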
Remove files from the working tree and from the index
Remove files from the index, or from the working tree and the index. git rm will not remove a file from just your working directory. (There is no option to remove a file only from the working tree and yet keep it in the index; use /bin/rm if you want to do that.) The files being removed have to be identical to the tip of the branch, and no updates to their contents can be staged in the index, though that default behavior can be overridden with the -f option. When --cached is given, the staged content has to match either the tip of the branch or the file on disk, allowing the file to be removed from just the index
git rm [-f | --force] [-n] [-r] [--cached] [--ignore-unmatch] [--quiet] [--] <file>...
<file>... # Files to remove. Fileglobs (e.g. *.c) can be given to remove all matching files. If you want Git to expand file glob characters, you may need to shell-escape them
-f, --force # Override the up-to-date check
-n, --dry-run # Don’t actually remove any file(s). Instead, just show if they exist in the index and would otherwise be removed by the command
-r # Allow recursive removal when a leading directory name is given
-- # This option can be used to separate command-line options from the list of files, (useful when filenames might be mistaken for command-line options)
--cached # Use this option to unstage and remove paths only from the index. Working tree files, whether modified or not, will be left alone
--ignore-unmatch # Exit with a zero status even if no files matched
-q, --quiet # git rm normally outputs one line (in the form of an rm command) for each file removed. This option suppresses that output
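Typical invocations (file names are placeholders):
git rm old_file.txt # remove from the index and the working tree
git rm --cached secrets.txt # stop tracking the file but keep it on disk
git rm -r --cached build/ # recursively untrack a directory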
Reset current HEAD to the specified state
In the first three forms, copy entries from <tree-ish> to the index. In the last form, set the current branch head (HEAD) to <commit>, optionally modifying index and working tree to match. The <tree-ish>/<commit> defaults to HEAD in all forms
# These forms reset the index entries for all paths that match the <pathspec> to their state at <tree-ish>
git reset [-q] [<tree-ish>] [--] <pathspec>...
git reset [-q] [--pathspec-from-file=<file> [--pathspec-file-nul]] [<tree-ish>]
# Interactively select hunks in the difference between the index and <tree-ish> (defaults to HEAD). The chosen hunks are applied in reverse to the index. This means that git reset -p is the opposite of git add -p
git reset (--patch | -p) [<tree-ish>] [--] [<pathspec>...]
# This form resets the current branch head to <commit> and possibly updates the index (resetting it to the tree of <commit>) and the working tree depending on <mode>
git reset [<mode>] [<commit>]
--soft # Does not touch the index file or the working tree at all (but resets the head to <commit>, just like all modes do)
--mixed # Resets the index but not the working tree (i.e., the changed files are preserved but not marked for commit) and reports what has not been updated
--hard # Resets the index and working tree. Any changes to tracked files in the working tree since <commit> are discarded
--merge # Resets the index and updates the files in the working tree that are different between <commit> and HEAD, but keeps those which are different between the index and working tree (i.e. which have changes which have not been added)
--keep # Resets index entries and updates files in the working tree that are different between <commit> and HEAD
-q, --quiet, --no-quiet # Be quiet, only report errors. The default behavior is set by the reset.quiet config option. --quiet and --no-quiet will override the default behavior
--pathspec-from-file=<file> # Pathspec is passed in <file> instead of commandline args. If <file> is exactly - then standard input is used
--pathspec-file-nul # Only meaningful with --pathspec-from-file. Pathspec elements are separated with NUL character and all other characters are taken literally (including newlines and quotes)
-- # Do not interpret any more arguments as options
<pathspec>... # Limits the paths affected by the operation
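Common forms (commit references and paths are placeholders):
git reset HEAD path/to/file # unstage a file, keep the working tree changes
git reset --soft HEAD~1 # undo the last commit, keep its changes staged
git reset --hard HEAD~1 # discard the last commit and all local changes to tracked files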
Add file contents to the index
This command updates the index using the current content found in the working tree, to prepare the content staged for the next commit. It typically adds the current content of existing paths as a whole, but with some options it can also be used to add content with only part of the changes made to the working tree files applied, or remove paths that do not exist in the working tree anymore
git add [--verbose | -v] [--dry-run | -n] [--force | -f] [--interactive | -i] [--patch | -p] [--edit | -e] [--[no-]all | --[no-]ignore-removal | [--update | -u]] [--intent-to-add | -N] [--refresh] [--ignore-errors] [--ignore-missing] [--renormalize] [--chmod=(+|-)x] [--pathspec-from-file=<file> [--pathspec-file-nul]] [--] [<pathspec>...]
<pathspec>... # Files to add content from. Fileglobs (e.g. *.c) can be given to add all matching files
-n, --dry-run # Don’t actually add the file(s), just show if they exist and/or will be ignored
-v, --verbose # Be verbose
-f, --force # Allow adding otherwise ignored files
-i, --interactive # Add modified contents in the working tree interactively to the index. Optional path arguments may be supplied to limit operation to a subset of the working tree
-p, --patch # Interactively choose hunks of patch between the index and the work tree and add them to the index
-e, --edit # Open the diff vs. the index in an editor and let the user edit it
-u, --update # Update the index just where it already has an entry matching <pathspec>
-A, --all, --no-ignore-removal # Update the index not only where the working tree has a file matching <pathspec> but also where the index already has an entry
--no-all, --ignore-removal # Update the index by adding new files that are unknown to the index and files modified in the working tree, but ignore files that have been removed from the working tree
-N, --intent-to-add # Record only the fact that the path will be added later. An entry for the path is placed in the index with no content
--refresh # Don’t add the file(s), but only refresh their stat() information in the index
--ignore-errors # If some files could not be added because of errors indexing them, do not abort the operation, but continue adding the others
--ignore-missing # Can only be used together with --dry-run; checks whether any of the given files would be ignored, no matter if they are already present in the work tree or not
--no-warn-embedded-repo # By default, git add will warn when adding an embedded repository to the index without using git submodule add to create an entry in .gitmodules. This option suppresses that warning
--renormalize # Apply the "clean" process freshly to all tracked files to forcibly add them again to the index
--chmod=(+|-)x # Override the executable bit of the added files
--pathspec-from-file=<file> # Pathspec is passed in <file> instead of commandline args
--pathspec-file-nul # Only meaningful with --pathspec-from-file
-- # This option can be used to separate command-line options from the list of files, (useful when filenames might be mistaken for command-line options)
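Typical invocations (paths are placeholders):
git add . # stage all changes under the current directory
git add -u # stage modifications and deletions of already tracked files
git add -p path/to/file # interactively pick hunks to stage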
https://chris.beams.io/posts/git-commit/
Record changes to the repository
Create a new commit containing the current contents of the index and the given log message describing the changes. The new commit is a direct child of HEAD, usually the tip of the current branch, and the branch is updated to point to it (unless no branch is associated with the working tree, in which case HEAD is "detached" as described in git-checkout(1))
git commit [-a | --interactive | --patch] [-s] [-v] [-u<mode>] [--amend] [--dry-run] [(-c | -C | --fixup | --squash) <commit>] [-F <file> | -m <msg>] [--reset-author] [--allow-empty] [--allow-empty-message] [--no-verify] [-e] [--author=<author>] [--date=<date>] [--cleanup=<mode>] [--[no-]status] [-i | -o] [--pathspec-from-file=<file> [--pathspec-file-nul]] [-S[<keyid>]] [--] [<pathspec>...]
-a, --all # Tell the command to automatically stage files that have been modified and deleted, but new files you have not told Git about are not affected
-p, --patch # Use the interactive patch selection interface to choose which changes to commit. See git-add(1) for details
-C <commit>, --reuse-message=<commit> # Take an existing commit object, and reuse the log message and the authorship information (including the timestamp) when creating the commit
-c <commit>, --reedit-message=<commit> # Like -C, but with -c the editor is invoked, so that the user can further edit the commit message
--fixup=<commit> # Construct a commit message for use with rebase --autosquash: the subject of the given commit prefixed with "fixup! "
--squash=<commit> # Construct a commit message for use with rebase --autosquash: the subject of the given commit prefixed with "squash! ". Can be combined with -m/-c/-C/-F
--reset-author # When used with -C/-c/--amend options, or when committing after a conflicting cherry-pick, declare that the authorship of the resulting commit now belongs to the committer
--short # When doing a dry-run, give the output in the short-format. See git-status(1) for details. Implies --dry-run
--branch # Show the branch and tracking info even in short-format
--porcelain # When doing a dry-run, give the output in a porcelain-ready format. See git-status(1) for details. Implies --dry-run
--long # When doing a dry-run, give the output in the long-format. Implies --dry-run
-z, --null # When showing short or porcelain status output, print the filename verbatim and terminate the entries with NUL, instead of LF
-F <file>, --file=<file> # Take the commit message from the given file. Use - to read the message from the standard input
--author=<author> # Override the commit author
--date=<date> # Override the author date used in the commit
-m <msg>, --message=<msg> # Use the given <msg> as the commit message
-t <file>, --template=<file> # When editing the commit message, start the editor with the contents in the given file
-s, --signoff # Add Signed-off-by line by the committer at the end of the commit log message
-n, --no-verify # This option bypasses the pre-commit and commit-msg hooks
--allow-empty # Usually recording a commit that has the exact same tree as its sole parent commit is a mistake, and the command prevents you from making such a commit
--allow-empty-message # Like --allow-empty this command is primarily for use by foreign SCM interface scripts
--cleanup=<mode> # This option determines how the supplied commit message should be cleaned up before committing. The <mode> can be strip, whitespace, verbatim, scissors or default
strip # Strip leading and trailing empty lines, trailing whitespace, commentary and collapse consecutive empty lines
whitespace # Same as strip except #commentary is not removed
verbatim # Do not change the message at all
scissors # Same as whitespace except that everything from (and including) the scissors line "# ------------------------ >8 ------------------------" is truncated, if the message is to be edited. "#" can be customized with core.commentChar
default # Same as strip if the message is to be edited. Otherwise whitespace
-e, --edit # The message taken from file with -F, command line with -m, and from commit object with -C are usually used as the commit log message unmodified. This option lets the user further edit the message taken from these sources
--no-edit # Use the selected commit message without launching an editor
--amend # Replace the tip of the current branch by creating a new commit
--no-post-rewrite # Bypass the post-rewrite hook
-i, --include # Before making a commit out of staged contents so far, stage the contents of paths given on the command line as well
-o, --only # Make a commit by taking the updated working tree contents of the paths specified on the command line, disregarding any contents that have been staged for other paths
--pathspec-from-file=<file> # Pathspec is passed in <file> instead of commandline args
--pathspec-file-nul # Only meaningful with --pathspec-from-file
-u[<mode>], --untracked-files[=<mode>] # Show untracked files. The mode parameter is optional (defaults to all). The possible options are:
no # Show no untracked files
normal # Shows untracked files and directories
all # Also shows individual files in untracked directories
-v, --verbose # Show unified diff between the HEAD commit and what would be committed at the bottom of the commit message template to help the user describe the commit by reminding what changes the commit has
-q, --quiet # Suppress commit summary message
--dry-run # Do not create a commit, but show a list of paths that are to be committed, paths with local changes that will be left uncommitted and paths that are untracked
--status # Include the output of git-status(1) in the commit message template when using an editor to prepare the commit message
--no-status # Do not include the output of git-status(1) in the commit message template when using an editor to prepare the default commit message
-S[<keyid>], --gpg-sign[=<keyid>] # GPG-sign commits. The keyid argument is optional and defaults to the committer identity; if specified, it must be stuck to the option without a space
--no-gpg-sign # Countermand commit.gpgSign configuration variable that is set to force each and every commit to be signed
-- # Do not interpret any more arguments as options
<pathspec>... # When pathspec is given on the command line, commit the contents of the files that match the pathspec without recording the changes already added to the index
USED
git commit -m "initial version - $(date +%Y-%m-%d)" # commit with message
git commit -a -m "$message" # commit unstaged files in working copy to the repo
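# a couple of additional common forms ($commit is a placeholder)
git commit --amend --no-edit # add what is staged to the previous commit, keeping its message
git commit --fixup=$commit # record a fix to squash later with git rebase --autosquash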
SHOW
Show various types of objects; a reference to an object or a list of objects may be passed to examine those specific objects
Shows one or more objects (blobs, trees, tags and commits)
- For commits it shows the log message and textual diff. It also presents the merge commit in a special format as produced by git diff-tree --cc
- For tags, it shows the tag message and the referenced objects
- For trees, it shows the names (equivalent to git ls-tree with --name-only)
- For plain blobs, it shows the plain contents
git show [<options>] [<object>...]
--pretty[=<format>] # displays more or less information depending on the format chosen: oneline, short, medium, full, fuller, email, raw, format:<string>. Default is medium
--abbrev-commit # shortens the length of output commit IDs. With --pretty=oneline can produce a highly succinct git log output
--no-abbrev-commit # Show the full 40 character commit ID
--oneline # uses the expanded command --pretty=oneline --abbrev-commit
--encoding[=<encoding>] # Character encoding on Git log messages defaults to UTF-8
--expand-tabs=<n> # replace tab characters with <n> spaces in the log message output
--expand-tabs # replace tab characters with 8 spaces (the default tab width)
--no-expand-tabs # do not expand tabs (same as --expand-tabs=0)
--notes=<ref> # show notes from the given <ref>
--no-notes # hide notes in output
--show-signature # show the signature if the commit is signed with a GPG key
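Common invocations (refs and paths are placeholders):
git show HEAD # log message and diff of the last commit
git show v1.0 # tag message and the tagged commit
git show HEAD~2:path/to/file # contents of a file as it was two commits ago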
SHOW-BRANCH
Show branches and their commits
Shows the commit ancestry graph starting from the commits named with <rev>s or <glob>s (or all refs under refs/heads and/or refs/tags) semi-visually
git show-branch [-a|--all] [-r|--remotes] [--topo-order | --date-order] [--current] [--color[=<when>] | --no-color] [--sparse] [--more=<n> | --list | --independent | --merge-base] [--no-name | --sha1-name] [--topics] [(<rev> | <glob>)...]
git show-branch (-g|--reflog)[=<n>[,<base>]] [--list] [<ref>]
<rev> # Arbitrary extended SHA-1 expression (see gitrevisions(7)) that typically names a branch head or a tag
<glob> # A glob pattern that matches branch or tag names under refs/. For example, if you have many topic branches under refs/heads/topic, giving topic/* would show all of them
-r, --remotes # Show the remote-tracking branches
-a, --all # Show both remote-tracking branches and local branches
--current # With this option, the command includes the current branch in the list of revs to be shown when it is not given on the command line
--topo-order # By default, the branches and their commits are shown in reverse chronological order. This option makes them appear in topological order (i.e., descendant commits are shown before their parents)
--date-order # This option is similar to --topo-order in the sense that no parent comes before all of its children, but otherwise commits are ordered according to their commit date
--sparse # By default, the output omits merges that are reachable from only one tip being shown. This option makes them visible
--more=<n> # Usually the command stops output upon showing the commit that is the common ancestor of all the branches. This flag tells the command to go <n> more common commits beyond that
--list # Synonym to --more=-1
--merge-base # Instead of showing the commit list, determine possible merge bases for the specified commits
--independent # Among the <reference>s given, display only the ones that cannot be reached from any other <reference>
--no-name # Do not show naming strings for each commit
--sha1-name # Instead of naming the commits using the path to reach them from heads (e.g. "master~2" to mean the grandparent of "master"), name them with the unique prefix of their object names
--topics # Shows only commits that are NOT on the first branch given. This helps track topic branches by hiding any commit that is already in the main line of development
-g, --reflog[=<n>[,<base>]] [<ref>] # Shows <n> most recent ref-log entries for the given ref. If <base> is given, <n> entries going back from that entry. <base> can be specified as count or date. When no explicit <ref> parameter is given, it defaults to the current branch (or HEAD if it is detached)
--color[=<when>] # Color the status sign (one of these: * ! + -) of each commit corresponding to the branch it’s in
--no-color # Turn off colored output, even when the configuration file gives the default to color output
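Examples (branch names are placeholders):
git show-branch master topic/* # compare master with all topic branches
git show-branch --all --list # one-line listing of every local and remote-tracking head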
Show the working tree status
Displays paths that have differences between the index file and the current HEAD commit, paths that have differences between the working tree and the index file, and paths in the working tree that are not tracked by Git
git status [<options>...] [--] [<pathspec>...]
-s, --short # Give the output in the short-format
-b, --branch # Show the branch and tracking info even in short-format
--show-stash # Show the number of entries currently stashed away
--porcelain[=<version>] # Give the output in an easy-to-parse format for scripts. This is similar to the short output, but will remain stable across Git versions and regardless of user configuration
--long # Give the output in the long-format. This is the default
-v, --verbose # In addition to the names of files that have been changed, also show the textual changes that are staged to be committed
-u[<mode>], --untracked-files[=<mode>] # Show untracked files. The possible options are:
no # Show no untracked files
normal # Shows untracked files and directories
all # Also shows individual files in untracked directories
--ignore-submodules[=<when>] # Ignore changes to submodules when looking for changes. <when> can be either "none", "untracked", "dirty" or "all", which is the default
--ignored[=<mode>] # Show ignored files as well. Defaults to traditional. The possible options are:
traditional # Shows ignored files and directories, unless --untracked-files=all is specified, in which case individual files in ignored directories are displayed
no # Show no ignored files
matching # Shows ignored files and directories matching an ignore pattern
-z # Terminate entries with NUL, instead of LF. This implies the --porcelain=v1 output format if no other format is given
--column[=<options>], --no-column # Display untracked files in columns. See configuration variable column.status for option syntax. --column and --no-column without options are equivalent to always and never respectively
--ahead-behind, --no-ahead-behind # Display or do not display detailed ahead/behind counts for the branch relative to its upstream branch. Defaults to true
--renames, --no-renames # Turn on/off rename detection regardless of user configuration. See also git-diff(1) --no-renames
--find-renames[=<n>] # Turn on rename detection, optionally setting the similarity threshold. See also git-diff(1) --find-renames
<pathspec>... # See the pathspec entry in gitglossary(7)
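Typical invocations:
git status -sb # short format plus branch/tracking info
git status --porcelain=v1 # stable output intended for scripts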
Clone a repository into a new directory
Clones a repository into a newly created directory, creates remote-tracking branches for each branch in the cloned repository (visible using git branch --remotes), and creates and checks out an initial branch that is forked from the cloned repository’s currently active branch
The following syntaxes may be used with them:
- ssh://[user@]host.xz[:port]/path/to/repo.git/
- git://host.xz[:port]/path/to/repo.git/
- http[s]://host.xz[:port]/path/to/repo.git/
- ftp[s]://host.xz[:port]/path/to/repo.git/
The ssh and git protocols additionally support ~username expansion:
- ssh://[user@]host.xz[:port]/~[user]/path/to/repo.git/
- git://host.xz[:port]/~[user]/path/to/repo.git/
- [user@]host.xz:/~[user]/path/to/repo.git/
For local repositories, the following syntaxes may be used:
- /path/to/repo.git/
- file:///path/to/repo.git/
git clone [--template=<template_directory>] [-l] [-s] [--no-hardlinks] [-q] [-n] [--bare] [--mirror] [-o <name>] [-b <name>] [-u <upload-pack>] [--reference <repository>] [--dissociate] [--separate-git-dir <git dir>] [--depth <depth>] [--[no-]single-branch] [--no-tags] [--recurse-submodules[=<pathspec>]] [--[no-]shallow-submodules] [--[no-]remote-submodules] [--jobs <n>] [--sparse] [--] <repository> [<directory>]
-l, --local # When the repository to clone from is on a local machine, this flag bypasses the normal "Git aware" transport mechanism and clones the repository by making a copy of HEAD and everything under objects and refs directories
--no-hardlinks # Force the cloning process from a repository on a local filesystem to copy the files under the .git/objects directory instead of using hardlinks. This may be desirable if you are trying to make a back-up of your repository
-s, --shared # When the repository to clone is on the local machine, instead of using hard links, automatically setup .git/objects/info/alternates to share the objects with the source repository
--reference[-if-able] <repository> # If the reference repository is on the local machine, automatically setup .git/objects/info/alternates to obtain objects from the reference repository
--dissociate # Borrow the objects from reference repositories specified with the --reference options only to reduce network transfer, and stop borrowing from them after a clone is made by making necessary local copies of borrowed objects
-q, --quiet # Operate quietly. Progress is not reported to the standard error stream
-v, --verbose # Run verbosely. Does not affect the reporting of progress status to the standard error stream
--progress # Progress status is reported on the standard error stream by default when it is attached to a terminal, unless --quiet is specified. This flag forces progress status even if the standard error stream is not directed to a terminal
--server-option=<option> # Transmit the given string to the server when communicating using protocol version 2. The given string must not contain a NUL or LF character
-n, --no-checkout # No checkout of HEAD is performed after the clone is complete
--bare # Make a bare Git repository. That is, instead of creating <directory> and placing the administrative files in <directory>/.git, make the <directory> itself the $GIT_DIR
--sparse # Initialize the sparse-checkout file so the working directory starts with only the files in the root of the repository. The sparse-checkout file can be modified to grow the working directory as needed
--mirror # Set up a mirror of the source repository
-o <name>, --origin <name> # Instead of using the remote name origin to keep track of the upstream repository, use <name>
-b <name>, --branch <name> # Instead of pointing the newly created HEAD to the branch pointed to by the cloned repository’s HEAD, point to <name> branch instead
-u <upload-pack>, --upload-pack <upload-pack> # When given, and the repository to clone from is accessed via ssh, this specifies a non-default path for the command run on the other end
--template=<template_directory> # Specify the directory from which templates will be used; (See the "TEMPLATE DIRECTORY" section of git-init(1).)
-c <key>=<value>, --config <key>=<value> # Set a configuration variable in the newly-created repository; this takes effect immediately after the repository is initialized, but before the remote history is fetched or any files checked out
--depth <depth> # Create a shallow clone with a history truncated to the specified number of commits
--shallow-since=<date> # Create a shallow clone with a history after the specified time
--shallow-exclude=<revision> # Create a shallow clone with a history, excluding commits reachable from a specified remote branch or tag. This option can be specified multiple times
--[no-]single-branch # Clone only the history leading to the tip of a single branch, either specified by the --branch option or the primary branch remote’s HEAD points at. Further fetches into the resulting repository will only update the remote-tracking branch for the branch this option was used for the initial cloning. If the HEAD at the remote did not point at any branch when --single-branch clone was made, no remote-tracking branch is created
--no-tags # Don’t clone any tags, and set remote.<remote>.tagOpt=--no-tags in the config, ensuring that future git pull and git fetch operations won’t follow any tags
--recurse-submodules[=<pathspec>] # After the clone is created, initialize and clone submodules within based on the provided pathspec
--[no-]shallow-submodules # All submodules which are cloned will be shallow with a depth of 1
--[no-]remote-submodules # All submodules which are cloned will use the status of the submodule’s remote-tracking branch to update the submodule, rather than the superproject’s recorded SHA-1. Equivalent to passing --remote to git submodule update
--separate-git-dir=<git dir> # Instead of placing the cloned repository where it is supposed to be, place the cloned repository at the specified directory, then make a filesystem-agnostic Git symbolic link to there
-j <n>, --jobs <n> # The number of submodules fetched at the same time. Defaults to the submodule.fetchJobs option
<repository> # The (possibly remote) repository to clone from. See the GIT URLS section below for more information on specifying repositories
<directory> # The name of a new directory to clone into. The "humanish" part of the source repository is used if no directory is explicitly given (repo for /path/to/repo.git and foo for host.xz:foo/.git)
USED
git clone $urlrepo # clone a repository
git clone $urlrepo <directory> # clone a repository into <directory>
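# more examples ($urlrepo, branch and directory names are placeholders)
git clone --depth 1 $urlrepo # shallow clone containing only the most recent commit
git clone -b develop $urlrepo myproject # clone branch 'develop' into directory 'myproject'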
Create an empty Git repository or reinitialize an existing one
This command creates an empty Git repository - basically a .git directory with subdirectories for objects, refs/heads, refs/tags, and template files. An initial HEAD file that references the HEAD of the master branch is also created
git init [-q | --quiet] [--bare] [--template=<template_directory>] [--separate-git-dir <git dir>] [--shared[=<permissions>]] [directory]
-q, --quiet # Only print error and warning messages; all other output will be suppressed
--bare # Create a bare repository. If GIT_DIR environment is not set, it is set to the current working directory
--template=<template_directory> # Specify the directory from which templates will be used
--separate-git-dir=<git dir> # Instead of initializing the repository as a directory to either $GIT_DIR or ./.git/, create a text file there containing the path to the actual repository
--shared[=(false|true|umask|group|all|world|everybody|0xxx)] # Specify that the Git repository is to be shared amongst several users
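Typical invocations (paths are placeholders):
git init # initialize a repository in the current directory
git init --bare /srv/git/project.git # create a bare repository, e.g. to push to over ssh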
Get and set repository or global options
You can query/set/replace/unset options with this command. The name is actually the section and the key separated by a dot, and the value will be escaped
git config [<file-option>] [--type=<type>] [--show-origin] [-z|--null] name [value [value_regex]]
git config [<file-option>] [--type=<type>] --add name value
git config [<file-option>] [--type=<type>] --replace-all name value [value_regex]
git config [<file-option>] [--type=<type>] [--show-origin] [-z|--null] --get name [value_regex]
git config [<file-option>] [--type=<type>] [--show-origin] [-z|--null] --get-all name [value_regex]
git config [<file-option>] [--type=<type>] [--show-origin] [-z|--null] [--name-only] --get-regexp name_regex [value_regex]
git config [<file-option>] [--type=<type>] [-z|--null] --get-urlmatch name URL
git config [<file-option>] --unset name [value_regex]
git config [<file-option>] --unset-all name [value_regex]
git config [<file-option>] --rename-section old_name new_name
git config [<file-option>] --remove-section name
git config [<file-option>] [--show-origin] [-z|--null] [--name-only] -l | --list
git config [<file-option>] --get-color name [default]
git config [<file-option>] --get-colorbool name [stdout-is-tty]
git config [<file-option>] -e | --edit
--local # write to the repository .git/config file
--global # write to global ~/.gitconfig
--system # write to system-wide $(prefix)/etc/gitconfig
-e, --edit # opens an editor to modify the specified config file
--replace-all # Default behavior is to replace at most one line. This replaces all lines matching the key
--add # adds a new line to the option without altering any existing values
--get key [value-regex] # print the value for the exactly matching key, optionally limited to values matching the given pattern
--get-all key [value-regex] # same as --get but return all values
--get-regexp key-regex [value-regex] # print all "key value" pairs whose key (and optionally value) matches the given patterns
--get-urlmatch section[.var] URL # when given a two-part name section.key, the value for section.<url>.key whose <url> part matches the best to the given URL is returned (if no such key exists, the value for section.key is used as a fallback)
--worktree # similar to --local except that .git/config.worktree is read from or written to if extensions.worktreeConfig is present
-f config-file, --file config-file # use the given config file instead of the one specified by GIT_CONFIG
--blob blob # similar to --file but use the given blob instead of a file
--remove-section # remove the given section from the configuration file
--rename-section # rename the given section to a new name
--unset # remove the line matching the key from config file
--unset-all # remove all lines matching the key from config file
-l, --list # list all variables set in config file, along with their values
--type <type> # ensure that any input or output is valid under the given type constraint(s)
bool / int / bool-or-int / path / expiry-date / color
-z, --null # output values and/or keys, always end values with the null character (instead of a newline)
--name-only # output only the names of config variables for --list or --get-regexp
--show-origin # Augment the output of all queried config options with the origin type and the actual origin
USED
git config
--system core.editor vim # set the default editor
--system merge.tool meld # set the default merge tool (used by git mergetool)
--global user.name "aguy tech" # set the name of user
--global user.email "aguytech@free.fr" # set the email of user
-l # print all configurations
-l --system # print system configurations
--get user.email # print the value of the configuration key 'user.email'
--get-all core.editor # print all defined values (local, global, system) for the key 'core.editor'
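A couple more examples (the alias name and value are illustrative):
git config --global alias.lg "log --oneline --graph --decorate" # define a 'git lg' alias
git config --global --unset user.email # remove a key from the global configuration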
PACKAGE
package states
not-installed # The package is not installed on your system
config-files # Only the configuration files of the package exist on the system
half-installed # The installation of the package has been started, but not completed for some reason
unpacked # The package is unpacked, but not configured
half-configured # The package is unpacked and configuration has been started, but not yet completed for some reason
triggers-awaited # The package awaits trigger processing by another package
triggers-pending # The package has been triggered
installed # The package is correctly unpacked and configured
package selection states
install # The package is selected for installation
hold # A package marked to be on hold is not handled by dpkg, unless forced to do that with option --force-hold
deinstall # The package is selected for deinstallation (i.e. we want to remove all files, except configuration files)
purge # The package is selected to be purged (i.e. we want to remove everything from system directories, even configuration files)
package flags
ok # A package marked ok is in a known state, but might need further processing
reinstreq # A package marked reinstreq is broken and requires reinstallation. These packages cannot be removed, unless forced with option --force-remove-reinstreq
ACTIONS
-i, --install package-file... # Install the package. If --recursive or -R option is specified, package-file must refer to a directory instead
--unpack package-file... # Unpack the package, but don't configure it. If --recursive or -R option is specified, package-file must refer to a directory instead
--configure package...|-a|--pending # Configure a package which has been unpacked but not yet configured. If -a or --pending is given instead of package, all unpacked but unconfigured packages are configured
--triggers-only package...|-a|--pending # Processes only triggers
-r, --remove package...|-a|--pending # Remove an installed package
-V, --verify [package-name...] # Verifies the integrity of package-name or all packages if omitted, by comparing information from the files installed by a package with the files metadata information stored in the dpkg database
-C, --audit [package-name...] # Performs database sanity and consistency checks for package-name, or all packages if omitted (per-package checks)
--update-avail [Packages-file] # Update dpkg's and dselect's idea of which packages are available; old information is replaced with the information in the Packages-file
--merge-avail [Packages-file] # Update dpkg's and dselect's idea of which packages are available; old information is combined with the information from the Packages-file
-A, --record-avail package-file... # Update dpkg and dselect's idea of which packages are available with information from the package package-file
--clear-avail # Erase the existing information about what packages are available
--get-selections [package-name-pattern...] # Get list of package selections, and write it to stdout
--set-selections # Set package selections using file read from stdin
--clear-selections # Set the requested state of every non-essential package to deinstall
--yet-to-unpack # Searches for packages selected for installation, but which for some reason still haven't been installed
--predep-package # Print a single package which is the target of one or more relevant pre-dependencies and has itself no unsatisfied pre-dependencies
--add-architecture architecture # Add architecture to the list of architectures for which packages can be installed without using --force-architecture
--remove-architecture architecture # Remove architecture from the list of architectures for which packages can be installed without using --force-architecture
--print-architecture # Print architecture of packages dpkg installs
--print-foreign-architectures # Print a newline-separated list of the extra architectures dpkg is configured to allow packages to be installed for
--assert-feature # Asserts that dpkg supports the requested feature. The assertable features are:
support-predepends # Supports the Pre-Depends field
working-epoch # Supports epochs in version strings
long-filenames # Supports long filenames in deb(5) archives
multi-conrep # Supports multiple Conflicts and Replaces
multi-arch # Supports multi-arch fields and semantics
versioned-provides # Supports versioned Provides
--validate-thing string # Validate that the thing string has a correct syntax. The validatable things are:
pkgname # Validates the given package name
trigname # Validates the given trigger name
archname # Validates the given architecture name
version # Validates the given version
--compare-versions ver1 op ver2 # Compare version numbers, where op is a binary operator. dpkg returns true (0) if the specified condition is satisfied, and false (1) otherwise
-?, --help # Display a brief help message
--force-help # Give help about the --force-thing options
-Dh, --debug=help # Give help about debugging options
--version # Display dpkg version information
dpkg-deb actions # See dpkg-deb(1) for more information about the following actions
-b, --build directory [archive|directory] # Build a deb package
-c, --contents archive # List contents of a deb package
-e, --control archive [directory] # Extract control-information from a package
-x, --extract archive directory # Extract the files contained by package
-X, --vextract archive directory # Extract and display the filenames contained by a package
-f, --field archive [control-field...] # Display control field(s) of a package
--ctrl-tarfile archive # Output the control tar-file contained in a Debian package
--fsys-tarfile archive # Output the filesystem tar-file contained by a Debian package
-I, --info archive [control-file...] # Show information about a package
dpkg-query actions # See dpkg-query(1) for more information about the following actions
-l, --list package-name-pattern... # List packages matching given pattern
-s, --status package-name... # Report status of specified package
-L, --listfiles package-name... # List files installed to your system from package-name
-S, --search filename-search-pattern... # Search for a filename from installed packages
-p, --print-avail package-name... # Display details about package-name, as found in /var/lib/dpkg/available. Users of APT-based frontends should use apt-cache show package-name instead
OPTIONS
--abort-after=number # Change after how many errors dpkg will abort. The default is 50
-B, --auto-deconfigure # When a package is removed, there is a possibility that another installed package depended on the removed package; this option causes such a package to be deconfigured automatically
-Doctal, --debug=octal # Switch debugging on
--force-things
--no-force-things, --refuse-things # Force or refuse (no-force and refuse mean the same thing) to do some things
--ignore-depends=package,... # Ignore dependency-checking for specified packages
--no-act, --dry-run, --simulate # Do everything which is supposed to be done, but don't write any changes
-R, --recursive # Recursively handle all regular files matching pattern *.deb found at specified directories and all of its subdirectories
-G # Don't install a package if a newer version of the same package is already installed. This is an alias of --refuse-downgrade
--admindir=dir # Change default administrative directory, which contains many files that give information about status of installed or uninstalled packages, etc
--instdir=dir # Change default installation directory which refers to the directory where packages are to be installed
--root=dir # Changing root changes instdir to «dir» and admindir to «dir/var/lib/dpkg»
-O, --selected-only # Only process the packages that are selected for installation
-E, --skip-same-version # Don't install the package if the same version of the package is already installed
--pre-invoke=command
--post-invoke=command # Set an invoke hook command to be run via “sh -c” before or after the dpkg run for the unpack, configure, install, triggers-only, remove, purge, add-architecture and remove-architecture dpkg actions
--path-exclude=glob-pattern
--path-include=glob-pattern # Set glob-pattern as a path filter, either by excluding or re-including previously excluded paths matching the specified patterns during install
--verify-format format-name # Sets the output format for the --verify command
--status-fd n # Send machine-readable package status and progress information to file descriptor n
--status-logger=command # Send machine-readable package status and progress information to the shell command's standard input, to be run via “sh -c”
--log=filename # Log status change updates and actions to filename, instead of the default /var/log/dpkg.log
--no-debsig # Do not try to verify package signatures
--no-triggers # Do not run any triggers in this run
--triggers # Cancels a previous --no-triggers
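Typical invocations (package and file names are illustrative):
dpkg -i ./hello_2.10-2_amd64.deb # install a local .deb file
dpkg -l 'openssh*' # list installed packages matching a pattern
dpkg -L openssh-server # list the files installed by a package
dpkg -S /usr/sbin/sshd # find which package owns a file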
MAN
synopsis
sed [OPTION]... {script-only-if-no-other-script} [input-file]...
command syntax
[address[,address]][!]command[arguments]
[address[,address]]{
command1
command2;command3
}
options
-n, --quiet, --silent # suppress automatic printing of pattern space
-e script, --expression=script # add the script to the commands to be executed
-f script-file, --file=script-file # add the contents of script-file to the commands to be executed
--follow-symlinks # follow symlinks when processing in place
-i[SUFFIX], --in-place[=SUFFIX] # edit files in place (makes backup if SUFFIX supplied)
-l N, --line-length=N # specify the desired line-wrap length for the `l' command
--posix # disable all GNU extensions
-E, -r, --regexp-extended # use extended regular expressions in the script (for portability use POSIX -E)
-s, --separate # consider files as separate rather than as a single, continuous long stream
--sandbox # operate in sandbox mode
-u, --unbuffered # load minimal amounts of data from the input files and flush the output buffers more often
-z, --null-data # separate lines by NUL characters
--help # display this help and exit
--version # output version information and exit
Zero-address ``commands''
: label # Label for b and t commands
'#'comment # The comment extends until the next newline (or the end of a -e script fragment)
} # The closing bracket of a { } block
Zero- or One- address commands
= # Print the current line number
a \text # Append text, which has each embedded newline preceded by a backslash
i \text # Insert text, which has each embedded newline preceded by a backslash
c \text # Replace text, which has each embedded newline preceded by a backslash
q [exit-code] # Immediately quit the sed script without processing any more input, except that if auto-print is not disabled the current pattern space will be printed
Q [exit-code] # Immediately quit the sed script without processing any more input
r filename # Append text read from filename
R filename # Append a line read from filename. Each invocation of the command reads a line from the file. This is a GNU extension
Commands which accept address ranges
{ # Begin a block of commands (end with a })
b label # Branch to label; if label is omitted, branch to end of script
c \text # Replace the selected lines with text, which has each embedded newline preceded by a backslash
d # Delete pattern space. Start next cycle
D # If pattern space contains no newline, start a normal new cycle as if the d command was issued. Otherwise, delete text in the pattern space up to the first newline, and restart cycle with the resultant pattern space, without reading a new line of input
h H # Copy/append pattern space to hold space
g G # Copy/append hold space to pattern space
l # List out the current line in a ``visually unambiguous'' form
l width # List out the current line in a ``visually unambiguous'' form, breaking it at width characters. This is a GNU extension
n N # Read/append the next line of input into the pattern space
p # Print the current pattern space
P # Print up to the first embedded newline of the current pattern space
s/regexp/replacement/ # Attempt to match regexp against the pattern space. If successful, replace that portion matched with replacement. The replacement may contain the special character & to refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding matching sub-expressions in the regexp
t label # If a s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script
T label # If no s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script. This is a GNU extension
w filename # Write the current pattern space to filename
W filename # Write the first line of the current pattern space to filename. This is a GNU extension
x # Exchange the contents of the hold and pattern spaces
y/source/dest/ # Transliterate the characters in the pattern space which appear in source to the corresponding character in dest
Addresses
Sed commands can be given with no addresses, in which case the command will be executed for all input lines; with one address, in which case the command will only be executed for input lines which match that address; or with two addresses, in which case the command will be executed for all input lines which match the inclusive range of lines starting from the first address and continuing to the second address. Three things to note about address ranges: the syntax is addr1,addr2 (i.e., the addresses are separated by a comma); the line which addr1 matched will always be accepted, even if addr2 selects an earlier line; and if addr2 is a regexp, it will not be tested against the line that addr1 matched
After the address (or address-range), and before the command, a ! may be inserted, which specifies that the command shall only be executed if the address (or address-range) does not match
The following address types are supported:
number # Match only the specified line number (which increments cumulatively across files, unless the -s option is specified on the command line)
first~step # Match every step'th line starting with line first. For example, ``sed -n 1~2p'' will print all the odd-numbered lines in the input stream, and the address 2~5 will match every fifth line, starting with the second. first can be zero; in this case, sed operates as if it were equal to step. (This is an extension.)
$ # Match the last line
/regexp/ # Match lines matching the regular expression regexp
\cregexpc # Match lines matching the regular expression regexp. The c may be any character
GNU sed also supports some special 2-address forms:
0,addr2 # Start out in "matched first address" state, until addr2 is found. This is similar to 1,addr2, except that if addr2 matches the very first line of input the 0,addr2 form will be at the end of its range, whereas the 1,addr2 form will still be at the beginning of its range. This works only when addr2 is a regular expression
addr1,+N # Will match addr1 and the N lines following addr1
addr1,~N # Will match addr1 and the lines following addr1 until the next line whose input line number is a multiple of N
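Common invocations (files and patterns are illustrative):
sed -n '1,10p' file.txt # print only lines 1 to 10
sed -i.bak 's/foo/bar/g' file.txt # replace in place, keeping a .bak backup
sed '/^#/d;/^$/d' config.txt # drop comment and blank lines
sed -E 's/([0-9]+)/<\1>/' file.txt # extended regexp with a backreference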
https://doc.ubuntu-fr.org/gnupg
CONF
~/.gnupg/gpg.conf
# default key to use
default-key <key-uid>
LIST
--list-keys # list all keys
--list-keys --keyid-format {none|short|0xshort|long|0xlong} # list keys with a specified format
--list-secret-keys # list only secret keys
--list-public-keys # list only public keys
--search-keys <id>|<identifier>|<email> # search a key with id, identifier, email ...
KEY
--full-gen-key # generate a key (rsa, 4096, 2y)
--delete-keys <id> # delete keys with id
--gen-revoke <email> # generate a revocation certificate. Keep it in a safe place!
--fingerprint <id> # verify the fingerprint of a key
--sign-key <id> # sign a trusted public key
SERVER
--send-key <id> [--keyserver <server>] # send the key with <id> to the specified server or the default one
--recv-keys <id> [--keyserver <server>] # receive the key with <id> from the server
FILE
--verify <file.sign> <file> # verify that <file> matches the detached signature <file.sign>
--clearsign <file> # sign a file
# the file must be encrypted with the recipient's public key
# --sign & --encrypt can be used together
gpg --encrypt <file> # encrypt a file; the output is binary
gpg --armor --output "file.gpg" --encrypt "file" # encrypt a file with ASCII-armored output, readable as plain text
gpg --output <file> --decrypt <file.gpg> # decrypt file
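A typical encrypt/decrypt round trip (recipient and file names are placeholders):
gpg --armor --recipient "user@example.com" --output file.txt.asc --encrypt file.txt # encrypt for a recipient's public key
gpg --output file.txt --decrypt file.txt.asc # decrypt with your private key
gpg --armor --detach-sign file.txt # create a detached ASCII signature file.txt.asc
gpg --verify file.txt.asc file.txt # check the detached signature against the file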
TOC
chapter |
---|
CONFIG |
COPY |
DNSMASQ |
LIST |
IMAGE |
INIT |
NETWORK |
PROFILE |
STORAGE |
EXEC |
PUBLISH |
TRICK |
CONFIG
set network configuration
echo -e "lxc.network.0.ipv4 = 10.100.0.10/24\nlxc.network.0.ipv4.gateway = 10.100.0.1\n" | lxc config set my-container raw.lxc -
COPY
Copy a container while changing a device key/value
lxc copy $ct $ctnew --device $devicename,$key=$value
# lxc copy srv-mail srv-mail-maria --device eth0,ipv4.address=10.0.0.10
DNSMASQ
# /path/to/host-to-ip-file.conf with following dnsmasq syntax
c1,10.100.0.10
c2,10.100.0.20
# then
lxc network set lxdbr0 raw.dnsmasq dhcp-hostsfile=/path/to/host-to-ip-file.conf
lxc restart c1 c2
LIST
lxc list $str # be careful: lists containers whose name contains $str
lxc list $regexp # filter the container list by names matching $regexp
lxc list $property_name=$value # filter the container list by property values (use --format json to see the real property names)
lxc list -c 46abcdlnNpPsSt L # show all container information
lxc list -c n4s # output minimal container information
lxc list image.os=Alpine image.architecture=amd64 # output information for containers matching image.os=Alpine image.architecture=amd64
# list names of all containers
lxc list -f csv -c n
lxc list --format=json | jq -r '.[].name'
# list names of running containers
lxc list status=Running -f csv -c n
lxc list --format=json | jq -r '.[] | select(.status == "Running").name'
# list names of stopped containers
lxc list status=Stopped -f csv -c n
lxc list --format=json | jq -r '.[] | select(.status == "Stopped").name'
# find an image by checking whether any of its aliases matches
lxc image list --format json | jq -r '.[].aliases[] | select(.name == "debian10").name'
IMAGE
lxc image list $repo:$os/$release
lxc image list $repo: architecture=$regexp description=$regexp os=$regexp release=$regexp serial=$regexp
lxc image list ubuntu: # list available images from the ubuntu repository
lxc image list images:debian # list available images from the images repository whose name starts with debian
lxc image list images:alpine/3.11/amd64 # list available images from the images repository matching os:Alpine release:3.11 architecture:amd64
lxc image list images: architecture=amd64 os=Debian release=buster
lxc image copy images:debian/10/amd64 local: --alias debian10 --alias debian-10 --alias buster --auto-update
lxc image copy images:alpine/3.12 local: --alias alpine312 --alias alpine-312 --auto-update
lxc image copy ubuntu:18.04/amd64 local: --alias bionic --alias ubuntu1804 --alias ubuntu-1804 --auto-update
lxc image copy ubuntu:20.04/amd64 local: --alias focal --alias ubuntu2004 --alias ubuntu-2004 --auto-update
lxc image copy images:centos/8/amd64 local: --alias centos8 --alias centos-8 --auto-update
INIT
lxc init alpine311 alpine311 # create (without starting) a container named alpine311 from the image alpine311
NETWORK
lxc network set $inet_name ipv4.address $address_cidr # set network address for interface $inet
lxc network create $inet_name ipv4.address=10.0.1.1/24 # add network interface
lxc network attach-profile $inet_name stock eth0 # attach network $inet_name to profile 'stock' as device eth0
PROFILE
lxc profile device add prod root disk pool=prod path=/ # add root path to profile from a pool 'prod'
lxc profile device add $profile_name $device_name disk pool=$pool_name path=$path_ct # add $device_name to $profile_name for $path_ct
lxc profile device add $profile_name $device_name disk source=$path_host path=$path_ct
STORAGE
lxc storage create default zfs size=50GB # create a pool using a loop device named 'default' with zfs driver & a size of 50G
lxc storage create stock zfs source=stock # create a pool 'stock' using a zfs pool named 'stock'
EXEC
echo -e "auth-zone=lxd\ndns-loop-detect" | lxc network set lxdbr0 raw.dnsmasq -
PUBLISH
lxc publish $CTNAME --alias $CTNAME-$HOSTNAME # default compression is gzip
lxc publish $CTNAME --alias $CTNAME-$HOSTNAME --compression xz # better compression but slower
TRICK
pretty print
json
lxc list --format=json $ctname$ # '$' anchors the match so only the exact name matches, not every name starting with $ctname
# pretty print
lxc list --format=json $ctname$ | jq
lxc list --format=json $ctname$ | python -m json.tool
yaml
lxc list --format=yaml $ctname$ # '$' anchors the match so only the exact name matches, not every name starting with $ctname
# pretty print
lxc list --format=yaml $ctname$ |yq r - -C
profile
yaml
Print name of host interfaces attached to the profile $profile
lxc profile show $profile | yq r - "devices.(type==nic).parent"
Print name of containers that use the profile $profile
lxc profile show $profile | yq r - 'used_by' | sed 's|^.*/||'
LXC
# allow a container to run containers inside it (nesting) without setting 'lxc.apparmor.profile: unconfined'
security.nesting = true
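To apply this to an existing container ($ct is a placeholder for the container name):
lxc config set $ct security.nesting true
lxc restart $ct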
TOC
chapter |
---|
OPTIONS |
ADD |
DEL |
System maintenance |
FIX |
UPDATE |
UPGRADE |
CACHE |
Querying information about packages |
INFO |
LIST |
SEARCH |
STATS |
DOT |
POLICY |
Repository maintenance |
INDEX |
FETCH |
VERIFY |
MANIFEST |
RC |
RC-STATUS |
RC-UPDATE |
RC-SERVICE |
OPTIONS
Global options
-h, --help # Show generic help or applet specific help
-p, --root DIR # Install packages to DIR
-X, --repository REPO # Use packages from REPO
-q, --quiet # Print less information
-v, --verbose # Print more information (can be doubled)
-i, --interactive # Ask confirmation for certain operations
-V, --version # Print program version and exit
-f, --force # Enable selected --force-* (deprecated)
--force-binary-stdout # Continue even if binary data is to be output
--force-broken-world # Continue even if 'world' cannot be satisfied
--force-non-repository # Continue even if packages may be lost on reboot
--force-old-apk # Continue even if packages use unsupported features
--force-overwrite # Overwrite files in other packages
--force-refresh # Do not use cached files (local or from proxy)
-U, --update-cache # Alias for --cache-max-age 1
--progress # Show a progress bar
--progress-fd FD # Write progress to fd
--no-progress # Disable progress bar even for TTYs
--purge # Delete also modified configuration files (pkg removal) and uninstalled packages from cache (cache clean)
--allow-untrusted # Install packages with untrusted signature or no signature
--wait TIME # Wait for TIME seconds to get an exclusive repository lock before failing
--keys-dir KEYSDIR # Override directory of trusted keys
--repositories-file REPOFILE # Override repositories file
--no-network # Do not use network (cache is still used)
--no-cache # Do not use any local cache path
--cache-dir CACHEDIR # Override cache directory
--cache-max-age AGE # Maximum AGE (in minutes) for index in cache before refresh
--arch ARCH # Use architecture with --root
--print-arch # Print default arch and exit
Commit options
-s, --simulate # Show what would be done without actually doing it
--clean-protected # Do not create .apk-new files in configuration dirs
--overlay-from-stdin # Read list of overlay files from stdin
--no-scripts # Do not execute any scripts
--no-commit-hooks # Skip pre/post hook scripts (but not other scripts)
--initramfs-diskless-boot # Enables options for diskless initramfs boot
ADD
Add PACKAGEs to 'world' and install (or upgrade) them, while ensuring that all dependencies are met
apk add [OPTIONS...] PACKAGE...
--initdb # Initialize database
-u, --upgrade # Prefer to upgrade package
-l, --latest # Select latest version of package (if it is not pinned), and print error if it cannot be installed due to other dependencies
-t, --virtual NAME # Instead of adding all the packages to 'world', create a new virtual package with the listed dependencies and add that to 'world'; the actions of the command are easily reverted by deleting the virtual package
examples
apk add $pkg=$version # install and fix package $pkg in version $version. ex: apk add bash=5.0.0-r0
apk add $pkg=~$major_version # install and fix package $pkg in major version $major_version. ex: apk add bash=~5.0
apk add "$pkg>$version" # install $pkg with a version constraint newer than $version instead of an exact pin (quote to keep '>' from being a shell redirection). ex: apk add "bash>5.0.0-r0"
apk add --allow-untrusted $path/$pkg.apk # install untrusted package from file
DEL
Remove PACKAGEs from 'world' and uninstall them
apk del [OPTIONS...] PACKAGE...
-r, --rdepends # Recursively delete all top-level reverse dependencies too
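examples
# illustrative only; $pkg is a placeholder package name
apk del $pkg # remove $pkg from 'world' and uninstall it
apk del -r $pkg # also remove the top-level packages that depend on $pkg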
FIX
Repair package or upgrade it without modifying main dependencies
apk fix [OPTIONS...] PACKAGE...
-d, --depends # Fix all dependencies too
-r, --reinstall # Reinstall the package (default)
-u, --upgrade # Prefer to upgrade package
-x, --xattr # Fix packages with broken xattrs
--directory-permissions # Reset all directory permissions
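examples
# illustrative only; $pkg is a placeholder package name
apk fix -r $pkg # reinstall $pkg
apk fix -d $pkg # repair $pkg together with its dependencies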
UPDATE
Update repository indexes from all remote repositories
apk update
UPGRADE
Upgrade currently installed packages to match repositories
apk upgrade [OPTIONS...] [PACKAGE...]
-a, --available # Resets versioned world dependencies, and changes to prefer replacing or downgrading packages (instead of holding them) if the currently installed package is no longer available from any repository
-l, --latest # Select latest version of package (if it is not pinned), and print error if it cannot be installed due to other dependencies
--no-self-upgrade # Do not do early upgrade of 'apk-tools' package
--self-upgrade-only # Only do self-upgrade
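examples
# a common upgrade sequence (sketch)
apk update && apk upgrade # refresh the indexes, then upgrade all installed packages
apk upgrade -a # also allow replacing/downgrading packages no longer available in the repositories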
CACHE
Download missing PACKAGEs to cache and/or delete unneeded files from cache
apk cache [OPTIONS...] sync | clean | download
-u, --upgrade # Prefer to upgrade package
-l, --latest # Select latest version of package (if it is not pinned), and print error if it cannot be installed due to other dependencies
examples
apk cache -v sync # clean cache & download missing packages
INFO
Give detailed information about PACKAGEs or repositories
apk info [OPTIONS...] PACKAGE...
-L, --contents # List contents of the PACKAGE
-e, --installed # Check if PACKAGE is installed
-W, --who-owns # Print the package owning the specified file
-R, --depends # List packages that the PACKAGE depends on
-P, --provides # List virtual packages provided by PACKAGE
-r, --rdepends # List all packages depending on PACKAGE
--replaces # List packages whose files PACKAGE might replace
-i, --install-if # List the PACKAGE's install_if rule
-I, --rinstall-if # List all packages having install_if referencing PACKAGE
-w, --webpage # Show URL for more information about PACKAGE
-s, --size # Show installed size of PACKAGE
-d, --description # Print description for PACKAGE
--license # Print license for PACKAGE
-t, --triggers # Print active triggers of PACKAGE
-a, --all # Print all information about PACKAGE
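examples
# illustrative only; $pkg and the file path are placeholders
apk info -e $pkg # check whether $pkg is installed
apk info -L $pkg # list the files installed by $pkg
apk info -W /usr/bin/$bin # print the package owning that file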
LIST
List packages by PATTERN and other criteria
apk list [OPTIONS...] PATTERN
-I, --installed # List installed packages only
-O, --orphaned # List orphaned packages only
-a, --available # List available packages only
-u, --upgradable # List upgradable packages only
-o, --origin # List packages by origin
-d, --depends # List packages by dependency
-P, --providers # List packages by provider
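examples
# illustrative only
apk list -I # list installed packages
apk list -u # list upgradable packages
apk list 'busybox*' # list packages matching a glob pattern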
SEARCH
Search package by PATTERNs or by indexed dependencies
apk search [OPTIONS...] PATTERN
-a, --all # Show all package versions (instead of latest only)
-d, --description # Search package descriptions (implies -a)
-x, --exact # Require exact match (instead of substring match)
-e # Synonym for -x (deprecated)
-o, --origin # Print origin package name instead of the subpackage
-r, --rdepends # Print reverse dependencies of package
--has-origin # List packages that have the given origin
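examples
# illustrative only
apk search -x busybox # exact package name match
apk search -d editor # search in package descriptions (implies -a)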
STATS
Show statistics about repositories and installations
apk stats
examples
apk stats # Show statistics about repositories and installations
DOT
Generate graphviz graphs
apk dot [OPTIONS...] PKGMASK...
--errors # Output only parts of the graph which are considered erroneous: e.g. cycles and missing packages
--installed # Consider only installed packages
POLICY
Show repository policy for packages
apk policy
INDEX
Create repository index file from FILEs
apk index [OPTIONS...] FILE...
-o, --output FILE # Write the generated index to FILE
-x, --index INDEX # Read INDEX to speed up new index creation by reusing the information from an old index
-d, --description TEXT # Embed TEXT as description and version information of the repository index
--rewrite-arch ARCH # Use ARCH as architecture for all packages
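example
# minimal sketch for indexing local .apk files in the current directory
apk index -o APKINDEX.tar.gz *.apk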
FETCH
Download PACKAGEs from global repositories to a local directory
apk fetch [OPTIONS...] PACKAGE...
-L, --link # Create hard links if possible
-R, --recursive # Fetch the PACKAGE and all its dependencies
--simulate # Show what would be done without actually doing it
-s, --stdout # Dump the .apk to stdout (incompatible with -o, -R, --progress)
-o, --output DIR # Directory to place the PACKAGEs to
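examples
# illustrative only; $pkg is a placeholder package name
apk fetch $pkg # download $pkg into the current directory
apk fetch -R -o /tmp $pkg # download $pkg and all its dependencies into /tmp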
VERIFY
Verify package integrity and signature
apk verify FILE...
MANIFEST
Show checksums of package contents
apk manifest PACKAGE...
RC-STATUS
run-level
- boot – Generally the only services you should add to the boot runlevel are those which deal with the mounting of filesystems, set the initial state of attached peripherals and logging. Hotplugged services are added to the boot runlevel by the system. All services in the boot and sysinit runlevels are automatically included in all other runlevels except for those listed here
- single – Stops all services except for those in the sysinit runlevel
- reboot – Changes to the shutdown runlevel and then reboots the host
- shutdown – Changes to the shutdown runlevel and then halts the host
- default – Used if no runlevel is specified. (This is generally the runlevel you want to add services to.)
rc-status
rc-status # show services attached to actual run-level
rc-status boot # show services attached to run-level 'boot'
-l, --list # Show list of run levels
-a, --all # Show services from all run levels
-s, --servicelist # Show service list
-m, --manual # show manually started services
-u, --unused # show services not assigned to any runlevel
-c, --crashed # show crashed services
-S, --supervised # show supervised services
RC-UPDATE
rc-update
rc-update show {run-level} # show services attached to a run-level
rc-update add {service} {run-level} # attach service to a run-level
rc-update del {service} {run-level} # detach service from a run-level
rc
rc $runlevel # change run-level to $runlevel
RC-SERVICE
rc-service
rc-service {service} start # start service
rc-service {service} stop # stop service
rc-service {service} restart # restart service
-e, --exists <arg> # tests if the service exists or not
-c, --ifcrashed # if the service is crashed run the command
-i, --ifexists # if the service exists run the command
-I, --ifinactive # if the service is inactive run the command
-N, --ifnotstarted # if the service is not started run the command
-s, --ifstarted # if the service is started run the command
-S, --ifstopped # if the service is stopped run the command
-l, --list # list all available services
-r, --resolve <arg> # resolve the service name to an init script
-Z, --dry-run # dry run (show what would happen)
-q, --quiet # run quietly (repeat to suppress errors)
yt-dlp $link # download $link with the best available video & audio formats (default behavior)
-F # list all formats available for $link
-f $format # download $link with the specified formats (audio or video, separate files)
-f $videoformat+$audioformat --merge-output-format $ext # download $link with the specified formats & merge them into one file; $ext is one of mkv, mp4, ogg, webm, flv
-o $file # specify the output file name
--audio-format $format # specify audio format: "best", "aac", "flac", "mp3", "m4a", "opus", "vorbis", or "wav"; "best" by default; No effect without -x
FORMAT
(mp4,webm)[height<480] # download the best mp4 and the best webm with a height lower than 480
examples
yt-dlp $link -f 'webm[height<800]+bestaudio' --merge-output-format webm # merge specified video + audio formats
yt-dlp $link -f '243+251' --merge-output-format webm # merge specified video + audio formats
manually rip a video list
After copying the relevant div from youtube.com into the file videos_list.html with the browser inspector
videos_file="videos_list.html"
videos_format="244+250"
videos_id=`sed -n "s|.*watch?v=\([^&]\+\)&.*|\1|p" "${videos_file}" | uniq | xargs`
for id in ${videos_id}; do echo yt-dlp "https://www.youtube.com/watch?v=${id}" -f "${videos_format}"; done
create batch from youtube list
grep 'watch?v=' ~/Downloads/yt_ls |sed "s|.*watch?v=\([^&]\+\)&.*|\1|"| sort -u > yt_id
format="247+251"
yt-dlp $(head -n1 yt_id) -F
is=$(wc -l < yt_id); i=0; while read id; do i=$((i+1)); echo "----- $i / $is"; yt-dlp $id -f ${format} || echo $id >> yt_err; done < yt_id
https://github.com/junegunn/vim-plug
TOC
chapter | designation |
---|---|
NORMAL MODE | For navigation and manipulation of text. This is the mode that vim will usually start in |
COMMAND MODE | For executing extra commands like the help, shell, ... |
INSERT/EDIT MODE | Press i to enter insert/edit mode & <esc> to return to normal mode |
3 main modes
Normal mode
For navigation and manipulation of text. This is the mode that vim will usually start in
Command mode
For executing extra commands like the help, shell, ...
Insert (edit) mode
For inserting new text, where you type into your file like other editors.
NORMAL MODE
MOTIONS
move around the text (file) with:
h / j / k / l # left / down / up / right (arrow keys also work)
ctrl+u / ctrl+d # half page up / down
ctrl+b / ctrl+f # page up / down
gg / G # beginning / end of file
0 / ^ / $ # start of line / first non-blank / end of line
- / + # first non-blank of previous / next line
b / B / w / W / e / E # word motions (see 'word' below)
word
w # next word
W # next WORD
b # previous word
B # previous WORD
e # end of word
E # end of WORD
w / W # word / WORD
[, ] # block
(, ) # block
<, > # block
", ' #" in double quote or quote
t # XML/HTML tag
s # sentence
line
0 # begin of line (column 0)
^ # begin of line (non-blank)
$ # end of line
\- = k^ # start of previous line
\+ = j^ # start of next line
file
gg / G # go to begin / end of file
[num]gg / [num]G / :num<CR> # go to line num
gd # go to definition of current word
gf # go to the file (under the cursor)
EDITING
syntax
set ft=prolog # set the file type to prolog
copy/paste:
yy # yank/copy current line
p # paste to the next line
P # paste above current line
commands
. # repeat last command
~ # swap case
x # delete the character under the cursor
r # replace char under the cursor
J # merge with the next line
dd # delete current line
D / C # delete / change from cursor to the EOL
u # undo
ctrl+r # redo
visual mode
v # into visual/select mode
V # into visual/select mode by line
ctrl+v # into visual/select mode by block
alignment
== # auto indent
>> # shift right, increase indent
<< # shift left, decrease indent
examples
di) # delete the text inside the current parentheses
ci" #" change the text inside ""
gUiw # make the word under the cursor to upper case
registers
"[char][y|d] #" yank/delete into register
"[char][p|P] #" paste from register
:echo @[char] # shows register content
:reg[isters] # shows all registers
macro
q[char] # start recording into register char
q # stop recording macro
@[char] # play macro in register char
@@ # repeat last playback
code folding
zi # toggles folding on or off
za # toggles current fold open or close
zc # close current fold
zC # close current fold recursively
zo # open current fold
zO # open current fold recursively
zR # open all folds
zM # close all folds
zv # expand folds to reveal the cursor
zk / zj # move to previous / next fold
WINDOW
move inside
H # top of window
M # middle of window
L # low (bottom) of window
zt # scroll to top
zz # scroll to middle
zb # scroll to bottom
ctrl+b / ctrl+f # previous / next page
ctrl+u / ctrl+d # previous / next half page
split
ctrl+w s = :sp[lit] # split current window horizontally
ctrl+w v = :vs[plit] # split current window vertically
ctrl+w c = :cl[ose] # close current window
ctrl+w o = :on[ly] # close all windows except current one
ctrl+w ctrl+w # switch to next split window
ctrl+w ctrl+p # switch to previous split window
ctrl+w hjkl # switch (move cursor) to left, below, above or right split
ctrl+w HJKL # move current window to left, below, above or right
ctrl+w r # rotate window clockwise
ctrl+w = # make all windows equal in size
[num]ctrl+w +- # increase/decrease current window height
[num]ctrl+w <> # increase/decrease current window width
ctrl+w _ # maximize current window height
ctrl+w T # move current window to new tab
JUMPS & MARKS
ctrl+o # jump/switch back in the buffer history
ctrl+i # jump/switch forward in the buffer history
ctrl+6 # jump/switch to the buffer you just left
ctrl+] # jump/switch to tag under cursor (if ./tags is available)
'' # jump/switch back to the position before the last jump
'. #' jump/switch to last edited line
} # next paragraph
{ # previous paragraph
% # switch matching (), {} or []
m[char] / '[char] # mark by / jump to [char]
m[CHAR] / '[CHAR] # mark by / jump to [CHAR] across the files.
SPELL CHECKING
]s # jump to next spelling error
[s # jump to previous spelling error
z= # suggest corrections for current word
zg # add current word to the dictionary
zw # remove current word from dictionary
SEARCHING
word
* # find next occurrence of the word under the cursor
'#' # find previous occurrence of the word under the cursor
/[pattern] # search forward by matching pattern
?[pattern] # search backward by matching pattern
n # next result
N # previous result
[I # show lines with matching word under cursor
character
f[char] # find next exact character in the line
F[char] # find previous exact character in the line
t[char] # move till (just before) the next [char] in the line
T[char] # move till (just after) the previous [char] in the line
; # repeat last f/t/F/T motion forward
, # repeat last f/t/F/T motion backward
COMMAND MODE
editing the text without transition to Insert Mode:
@: # repeat last command-line change (command invoked with ":", for example :s/old/new/).
windows and splits
:sp[lit] = ctrl+w s # split current window horizontally
:vs[plit] = ctrl+w v # split current window vertically
:cl[ose] = ctrl+w c # close current window
:on[ly] = ctrl+w o # close all windows except current one
lists
:jumps # shows the jump list
:changes # shows the change list
:reg[isters] # shows the registers
:marks # shows the marks
:delm[arks] {marks} # delete specified mark(s)
delm a b 1 \" # deletes a, b, 1 and "
delm a-h # deletes all marks from a to h
:delm[arks]! # deletes all lowercase marks
file and buffers
:w[rite] # write current file
:q # close/quit current file, split or tab
:wq = ZZ # write current file and quit
:q! = ZQ # quit without writing the changes
:qa # quit all splits
:ls # list all open files/buffers
:f[ile] = ctrl+g # shows current file path
:e[dit] # open a file for editing
:ene[w] # open a blank new file for editing
:b<n> # jump to buffer n returned from :ls
:b<file> # jump to buffer file, Tab to scroll through available options
:bn[ext] # jump to next buffer
:bp[rev] # jump to previous buffer
:bd[elete] # remove file from buffer list
shell
:mak[e] # run make in current directory
:cw # open the quickfix window with errors (if any)
:! # executes external shell command
:r[ead] # read external program output into current file
tabs
ctrl+w gf # open file under the cursor into new tab
:tabs # list current tabs and windows
:tabn = <ctrl+PageDown> # next tab
:tabn <n> # goto tab n
:tabp = tabN = <ctrl+PageUp> # previous tab
:tabe [file] # create a new blank tab or opens file in that tab
OPERATORS
operator commands are generally constructed as:
[operator][count][motion]
[operator]i[motion]
operators:
c # change command ...
d # delete ...
y # yank (copy) ...
g~ # swap case ...
gu # to lower case ...
gU # to upper case ...
HELP
:h cmd # normal mode command help
:h :cmd # command line help for cmd
:h i_cmd # insert mode command help
:h v_cmd # visual mode command help
:h c_cmd # command line editing cmd help
:h 'option' # help of option
:helpg[rep] # search through help docs!
special help sections
:h motions
:h word-motions
:h jump-motions
:h mark-motions
:h operators
:h buffers
:h windows
:h tabs
:h registers
:h pattern-searches
OPTIONS
:set <opt>? # shows current option value
:set no<opt> # turn off flag opt
:set opt # turn on flag opt
:set opt=val # override value of opt
:set opt+=val # append val to opt
:echo &opt # shows value of opt
essential options
hidden or hid # when off, a buffer is unloaded when it's abandoned.
laststatus or ls # when to show the status line: 0 (never), 1 (only with at least two windows), 2 (always)
hlsearch or hls # highlight search matches
number or nu # shows line number
showcmd or sc # shows command as you type them (may not be available on your compilation)
ruler or ru # shows line and column number of the cursor
wrap # controls line wrapping
ignorecase or ic # ignores case for search patterns
smartindent or si # flag for smart indenting
foldmethod or fdm # fold method
spell / nospell # enable or disable spell checking
SUBSTITUTE
:s/search/replace/ # basic substitution on a line
:%s/search/replace/ # run substitution on every line
:%s/search/replace/g # g flag means apply to every match
:%s/search/replace/c # c flag means ask for confirmation
tags / ctags
by executing ctags -R under the project tree:
:tag <name>TAB # goes to tag name
ctrl+] # goes to the tag under cursor
INSERT/EDIT MODE
insert
i # insert at left of cursor
a # insert at right of cursor
I # insert at the line beginning (non-blank)
A # insert at end of line
o # insert by adding new line below the cursor
O # insert by insert new line above the cursor
s # substitute at cursor and enter insert mode
S = ^DA = ddO # delete current line and enter insert mode
C = c$ # change line from cursor to EOL
mode change
Esc = ctrl+c = ctrl+[ # exit insert mode
auto complete
ctrl+p # auto-complete / previous item
ctrl+n # auto-complete / next item
ctrl+x ctrl+l # auto complete line mode
cool editing stuff
ctrl+w # delete word before cursor
ctrl+u # delete line before cursor
ctrl+r[char] # insert content of register [char]
ctrl+t # increase line indent
ctrl+d # decrease line indent
https://gist.github.com/azadkuh/5d223d46a8c269dadfe4
OPTIONS
:syntax on # enable syntax highlighting
:syntax off # disable syntax highlighting
:set nu / :set number # show line numbers
:set nonu / :set nonumber / set nu! # hide line numbers
VIMDIFF
ctrl+w ctrl+w # switch to the other diff window
do # get changes from other window into the current window
dp # put the changes from current window into the other window
]c # jump to the next change
[c # jump to the previous change
zo # open fold
zc # close fold
zr # reducing folding level
zm # one more folding level, please
:diffupdate, :diffu # recalculate the diff
:diffg RE # get from REMOTE
:diffg BA # get from BASE
:diffg LO # get from LOCAL
VIM to VIMDIFF
:vs file # vertical split with file
:split file # horizontal split file
ctrl+w ctrl+w # switch cursors to different split screen
:diffthis # invoke "diff mode" in file
:diffthis # switch to other file and invoke "diff mode"
:diffoff # turn off "diff mode"
SSH
change identity of key in 'authorized_keys'
file="/root/.ssh/authorized_keys"
sudo sed -i '/manjaro@970g/ s|^.* \(ssh-.*\)$|\1|' $file
sudo systemctl restart sshd.service
APT SOURCES
add contrib to main backports
file="/etc/apt/sources.list"
sed -i '/backports/ s| main| main contrib|' $file
apt update
ZFS
install zfs
apt install zfs-dkms zfsutils-linux # install zfs tools
echo -e "# zfs utils\nzfs" >> /etc/modules
modprobe zfs # or reboot
lsmod | grep zfs # verify the zfs module is correctly loaded
systemctl status zfs-* # verify all zfs services are correctly started
format
fdisk $device # use 36 for FreeBSD type
https://medium.com/@cq94/zfs-vous-connaissez-vous-devriez-1d2611e7dad6
The zfs command configures ZFS datasets within a ZFS storage pool. A dataset is identified by a unique path within the ZFS namespace
A dataset can be one of the following:
- File system
A ZFS dataset of type filesystem can be mounted within the standard system namespace and behaves like other file systems
- Volume
A logical volume exported as a raw or block device
- Snapshot
A read-only version of a file system or volume at a given point in time. It is specified as filesystem@name or volume@name
- Bookmark
Much like a snapshot, but without the hold on on-disk data. It can be used as the source of a send (but not for a receive). It is specified as filesystem#name or volume#name
ZFS File System Hierarchy
A ZFS storage pool is a logical collection of devices that provide space for datasets. A storage pool is also the root of the ZFS file system hierarchy
- Snapshots
A snapshot is a read-only copy of a file system or volume
- Bookmarks
A bookmark is like a snapshot, a read-only copy of a file system or volume. Bookmarks can be created extremely quickly, compared to snapshots, and they consume no additional space within the pool. Unlike snapshots, bookmarks can not be accessed through the filesystem in any way
- Clones
A clone is a writable volume or file system whose initial contents are the same as another dataset. Clones can only be created from a snapshot. As with snapshots, creating a clone is nearly instantaneous, and initially consumes no additional space
- Mount Points
Creating a ZFS file system is a simple operation, so the number of file systems per system is likely to be numerous
- Deduplication
Deduplication is the process for removing redundant data at the block level, reducing the total amount of data stored
SUBCOMMANDS
subcommand | Designation |
---|---|
CREATE | Creates a new ZFS file system |
DESTROY | Destroys the given dataset |
SNAPSHOT | Creates snapshots with the given names |
ROLLBACK | Roll back the given dataset to a previous snapshot |
CLONE | Creates a clone of the given snapshot |
PROMOTE | Promotes a clone file system to no longer be dependent on its "origin" snapshot |
RENAME | Renames dataset |
LIST | Lists the property information for the given datasets in tabular form |
SET | Sets the property or list of properties to the given value(s) for each dataset |
GET | Displays properties for the given datasets |
INHERIT | Clears the specified property, causing it to be inherited from an ancestor |
USERSPACE | Displays space consumed by, and quotas on, each user in the specified filesystem or snapshot |
GROUPSPACE | Displays space consumed by, and quotas on, each group in the specified filesystem or snapshot |
MOUNT | Displays all ZFS file systems currently mounted or mounts it |
UNMOUNT | Unmounts currently mounted ZFS file systems |
SHARE | Shares available ZFS file systems |
UNSHARE | Unshares currently shared ZFS file systems |
BOOKMARK | Creates a bookmark of the given snapshot |
SEND | Creates a stream representation of the second snapshot |
RECEIVE | Creates a snapshot whose contents are as specified in the stream provided |
ALLOW | Displays permissions or Delegates ZFS administration permission for the file systems to non-privileged users |
UNALLOW | Removes permissions that were granted with the zfs allow command |
HOLD | Adds a single reference, named with the tag argument, to the specified snapshot or snapshots |
HOLDS | Lists all existing user references for the given snapshot or snapshots |
RELEASE | Removes a single reference, named with the tag argument, from the specified snapshot or snapshots |
DIFF | Display the difference between a snapshot of a given filesystem and another snapshot of that filesystem from a later time or the current contents of the filesystem |
PROPERTIES | Native and user properties that report or control ZFS dataset behavior |
CREATE
Creates a new ZFS file system
zfs create [-p] [-o property=value]... filesystem # Creates a new ZFS file system. The file system is automatically mounted according to the mountpoint property inherited from the parent.
-o property=value # Sets the specified property as if the command zfs set property=value was invoked at the same time the dataset was created
-p # Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent.
Creates a volume of the given size
zfs create [-ps] [-b blocksize] [-o property=value]... -V size volume # Creates a volume of the given size. The volume is exported as a block device in /dev/zvol/path, where path is the name of the volume in the ZFS namespace
-b blocksize # Equivalent to -o volblocksize=blocksize
-o property=value # Sets the specified property as if the zfs set property=value command was invoked at the same time the dataset was created
-p # Creates all the non-existing parent datasets
-s # Creates a sparse volume with no reservation
examples
zfs create -o mountpoint=/var -o compression=lz4 $fs # create a filesytem with a mountpoint & compression options
zfs set quota=10G $fs # set a 10G quota on the filesystem $fs
zfs set compression=lz4 $fs # set lz4 compression for a fs
zfs set mountpoint=/var $fs # set mountpoint for a filesystem
DESTROY
Destroys the given dataset
zfs destroy [-Rfnprv] filesystem|volume
-R # Recursively destroy all dependents, including cloned file systems outside the target hierarchy.
-f # Force an unmount of any file systems using the unmount -f command
-n # Do a dry-run ("No-op") deletion. No data will be deleted
-p # Print machine-parsable verbose information about the deleted data
-r # Recursively destroy all children
-v # Print verbose information about the deleted data
Destroys the given snapshot
The given snapshots are destroyed immediately if and only if the zfs destroy command without the -d option would have destroyed it
zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]...
-R # Recursively destroy all clones of these snapshots, including the clones, snapshots, and children. -d flag will have no effect
-d # Defer snapshot deletion.
-n # Do a dry-run ("No-op") deletion. No data will be deleted
-p # Print machine-parsable verbose information about the deleted data
-r # Destroy (or mark for deferred deletion) all snapshots with this name in descendent file systems
-v # Print verbose information about the deleted data
Destroys the given bookmark
zfs destroy filesystem|volume#bookmark
SNAPSHOT
Creates snapshots with the given names
zfs snapshot [-r] [-o property=value]... filesystem@snapname|volume@snapname...
-o property=value # Sets the specified property; see zfs create for details
-r # Recursively create snapshots of all descendent datasets
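examples
# illustrative only, assuming a pool named 'tank' with a filesystem 'tank/home'
zfs snapshot tank/home@monday # snapshot a single filesystem
zfs snapshot -r tank/home@monday # snapshot tank/home and all its descendants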
ROLLBACK
Roll back the given dataset to a previous snapshot
Roll back the given dataset to a previous snapshot. The -rR options do not recursively destroy the child snapshots of a recursive snapshot. Only direct snapshots of the specified filesystem are destroyed by either of these options. To completely roll back a recursive snapshot, you must rollback the individual child snapshots.
zfs rollback [-Rfr] snapshot
-R # Destroy any more recent snapshots and bookmarks, as well as any clones of those snapshots
-f # Used with the -R option to force an unmount of any clone file systems that are to be destroyed
-r # Destroy any snapshots and bookmarks more recent than the one specified
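example
# illustrative only, assuming the snapshot from the SNAPSHOT example above
zfs rollback -r tank/home@monday # roll back, destroying any snapshots more recent than 'monday'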
CLONE
Creates a clone of the given snapshot
zfs clone [-p] [-o property=value]... snapshot filesystem|volume
-o property=value # Sets the specified property
-p # Creates all the non-existing parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent
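example
# illustrative only; dataset names are placeholders
zfs clone tank/home@monday tank/home-monday-clone # writable clone based on the snapshot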
PROMOTE
Promotes a clone file system to no longer be dependent on its "origin" snapshot
zfs promote clone-filesystem
RENAME
Renames dataset
Renames the given dataset. The new target can be located anywhere in the ZFS hierarchy, with the exception of snapshots. Snapshots can only be renamed within the parent file system or volume
zfs rename [-fp] filesystem|volume filesystem|volume
-f # Force unmount any filesystems that need to be unmounted in the process
-p # Creates all the nonexistent parent datasets. Datasets created in this manner are automatically mounted according to the mountpoint property inherited from their parent
Renames snapshot
Recursively rename the snapshots of all descendent datasets. Snapshots are the only dataset that can be renamed recursively
zfs rename -r snapshot snapshot
LIST
Lists the property information for the given datasets in tabular form
zfs list [-r|-d depth] [-Hp] [-o property[,property]...] [-s property]... [-S property]... [-t type[,type]...] [filesystem|volume|snapshot]...
-H # Used for scripting mode. Do not print headers and separate fields by a single tab instead of arbitrary white space
-S property # Same as the -s option, but sorts by property in descending order
-d depth # Recursively display any children of the dataset, limiting the recursion to depth
-o property # A comma-separated list of properties to display
-p # Display numbers in parsable (exact) values
-r # Recursively display any children of the dataset on the command line
-s property # A property for sorting the output by column in ascending order based on the value of the property
-t type # A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all
examples
zfs list -o name,used,available,readonly,exec,referenced,mountpoint,mounted,quota,clones
zfs list -t all -r $pool # recursively list all dataset types (filesystems, snapshots, volumes) in $pool
SET
Sets the property or list of properties to the given value(s) for each dataset. Only some properties can be edited
zfs set property=value [property=value]... filesystem|volume|snapshot...
GET
Displays properties for the given datasets
zfs get [-r|-d depth] [-Hp] [-o field[,field]...] [-s source[,source]...] [-t type[,type]...] all | property[,property]... filesystem|volume|snapshot|bookmark...
name Dataset name
property Property name
value Property value
source Property source. Can either be local, default, temporary, inherited, or none (-)
-H # Any headers are omitted, and fields are explicitly separated by a single tab instead of an arbitrary amount of space
-d depth # Recursively display any children of the dataset, limiting the recursion to depth
-o field # A comma-separated list of columns to display. name,property,value,source is the default value
-p # Display numbers in parsable (exact) values
-r # Recursively display properties for any children
-s source # A comma-separated list of sources to display
-t type # A comma-separated list of types to display, where type is one of filesystem, snapshot, volume, bookmark, or all
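examples
# illustrative only, assuming a pool named 'tank'
zfs get compression,quota tank/home # show two properties of one dataset
zfs get -r -o name,property,value used tank # recursively show 'used' for tank and its children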
INHERIT
Clears the specified property, causing it to be inherited from an ancestor, restored to default if no ancestor has the property set, or with the -S option reverted to the received value if one exists
zfs inherit [-rS] property filesystem|volume|snapshot...
-r # Recursively inherit the given property for all children
-S # Revert the property to the received value if one exists, otherwise operate as if the -S option was not specified
USERSPACE
Displays space consumed by, and quotas on, each user in the specified filesystem or snapshot. This corresponds to the userused@user, userobjused@user, userquota@user, and userobjquota@user properties.
zfs userspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]... [-t type[,type]...] filesystem|snapshot
-H # Do not print headers, use tab-delimited output
-S field # Sort by this field in reverse order
-i # Translate SID to POSIX ID. The POSIX ID may be ephemeral if no mapping exists
-n # Print numeric ID instead of user/group name
-o field[,field]... # Display only the specified fields from the following set: type, name, used, quota. The default is to display all fields.
-p # Use exact (parsable) numeric output
-s field # Sort output by this field. The -s and -S flags may be specified multiple times to sort first by one field, then by another. The default is -s type -s name
-t type[,type]... # Print only the specified types from the following set: all, posixuser, smbuser, posixgroup, smbgroup. The default is -t posixuser,smbuser. The default can be changed to include group types.
GROUPSPACE
Displays space consumed by, and quotas on, each group in the specified filesystem or snapshot
zfs groupspace [-Hinp] [-o field[,field]...] [-s field]... [-S field]... [-t type[,type]...] filesystem|snapshot
MOUNT
display
Displays all ZFS file systems currently mounted
zfs mount
mount
Mounts ZFS file systems
zfs mount [-Ov] [-o options] -a | filesystem
-O # Perform an overlay mount
-a # Mount all available ZFS file systems. Invoked automatically as part of the boot process
filesystem # Mount the specified filesystem
-o options # An optional, comma-separated list of mount options to use temporarily for the duration of the mount
-v # Report mount progress
UNMOUNT
Unmounts currently mounted ZFS file systems
zfs unmount [-f] -a | filesystem|mountpoint
-a # Unmount all available ZFS file systems
filesystem|mountpoint # Unmount the specified filesystem
-f # Forcefully unmount the file system, even if it is currently in use
SHARE
Shares available ZFS file systems
zfs share -a | filesystem
-a # Share all available ZFS file systems
filesystem # Share the specified filesystem according to the sharenfs and sharesmb properties
UNSHARE
Unshares currently shared ZFS file systems
zfs unshare -a | filesystem|mountpoint
-a # Unshare all available ZFS file systems
filesystem|mountpoint # Unshare the specified filesystem
BOOKMARK
Creates a bookmark of the given snapshot. Bookmarks mark the point in time when the snapshot was created, and can be used as the incremental source for a zfs send command
zfs bookmark snapshot bookmark
SEND
Creates a stream
Creates a stream representation of the second snapshot, which is written to standard output
zfs send [-DLPRcenpv] [[-I|-i] snapshot] snapshot
-D, --dedup # Generate a deduplicated stream. Blocks which would have been sent multiple times in the send stream will only be sent once. The receiving system must also support this feature to receive a deduplicated stream
-I snapshot # Generate a stream package that sends all intermediary snapshots from the first snapshot to the second snapshot
-L, --large-block # Generate a stream which may contain blocks larger than 128KB
-P, --parsable # Print machine-parsable verbose information about the stream package generated
-R, --replicate # Generate a replication stream package, which will replicate the specified file system, and all descendent file systems, up to the named snapshot
-e, --embed # Generate a more compact stream by using WRITE_EMBEDDED records for blocks which are stored more compactly on disk by the embedded_data pool feature
-c, --compressed # Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory
-i snapshot # Generate an incremental stream from the first snapshot (the incremental source) to the second snapshot (the incremental target)
-n, --dryrun # Do a dry-run ("No-op") send
-p, --props # Include the dataset's properties in the stream
-v, --verbose # Print verbose information about the stream package generated
Generate a send stream
Generate a send stream, which may be of a filesystem, and may be incremental from a bookmark
zfs send [-Lce] [-i snapshot|bookmark] filesystem|volume|snapshot
-L, --large-block # Generate a stream which may contain blocks larger than 128KB
-c, --compressed # Generate a more compact stream by using compressed WRITE records for blocks which are compressed on disk and in memory
-e, --embed # Generate a more compact stream by using WRITE_EMBEDDED records for blocks which are stored more compactly on disk by the embedded_data pool feature
-i snapshot|bookmark # Generate an incremental send stream
Resume an interrupted receive
Creates a send stream which resumes an interrupted receive
zfs send [-Penv] -t receive_resume_token
RECEIVE
Creates a snapshot
Creates a snapshot whose contents are as specified in the stream provided on standard input
zfs receive [-Fnsuv] [-d|-e] [-o origin=snapshot] [-o property=value] [-x property] filesystem
-F # Force a rollback of the file system to the most recent snapshot before performing the receive operation
-d # Discard the first element of the sent snapshot's file system name
-e # Discard all but the last element of the sent snapshot's file system name
-n # Do not actually receive the stream
-o origin=snapshot # Forces the stream to be received as a clone of the given snapshot
-o property=value # Sets the specified property as if the command zfs set property=value was invoked immediately before the receive
-s # If the receive is interrupted, save the partially received state, rather than deleting it
-u # File system that is associated with the received stream is not mounted
-v # Print verbose information about the stream and the time required to perform the receive operation
-x property # Ensures that the effective value of the specified property after the receive is unaffected by the value of that property in the send stream (if any), as if the property had been excluded from the send stream
Abort an interrupted receive
Abort an interrupted zfs receive -s, deleting its saved partially received state
zfs receive -A filesystem|volume
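examples
# illustrative sketch; pool, dataset and host names are placeholders
zfs send tank/data@snap1 | ssh $host zfs receive backup/data # initial full replication
zfs send -i tank/data@snap1 tank/data@snap2 | ssh $host zfs receive backup/data # incremental follow-up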
ALLOW
Display
Displays permissions that have been delegated on the specified filesystem or volume
zfs allow filesystem|volume
Delegates permission
Delegates ZFS administration permission for the file systems to non-privileged users
zfs allow [-dglu] user|group[,user|group]... perm|@setname[,perm|@setname]... filesystem|volume
zfs allow [-dl] -e|everyone perm|@setname[,perm|@setname]... filesystem|volume
-d # Allow only for the descendent file systems
-e|everyone # Specifies that the permissions be delegated to everyone
-g group[,group]... # Explicitly specify that permissions are delegated to the group
-l # Allow "locally" only for the specified file system
-u user[,user]... # Explicitly specify that permissions are delegated to the user
user|group[,user|group]... # Specifies to whom the permissions are delegated. Multiple entities can be specified as a comma-separated list
perm|@setname[,perm|@setname]... # The permissions to delegate. Multiple permissions may be specified as a comma-separated list. Permissions are generally the ability to use a ZFS subcommand or change a ZFS property
Available permissions
NAME TYPE NOTES
allow subcommand Must also have the permission that is being allowed
clone subcommand Must also have the 'create' ability and 'mount' ability in the origin file system
create subcommand Must also have the 'mount' ability
destroy subcommand Must also have the 'mount' ability
diff subcommand Allows lookup of paths within a dataset given an object number, and the ability to create snapshots necessary to 'zfs diff'
mount subcommand Allows mount/umount of ZFS datasets
promote subcommand Must also have the 'mount' and 'promote' ability in the origin file system
receive subcommand Must also have the 'mount' and 'create' ability
rename subcommand Must also have the 'mount' and 'create' ability in the new parent
rollback subcommand Must also have the 'mount' ability
send subcommand
share subcommand Allows sharing file systems over NFS or SMB protocols
snapshot subcommand Must also have the 'mount' ability
groupquota other Allows accessing any groupquota@... property
groupused other Allows reading any groupused@... property
userprop other Allows changing any user property
userquota other Allows accessing any userquota@... property
userused other Allows reading any userused@... property
aclinherit property
acltype property
atime property
canmount property
casesensitivity property
checksum property
compression property
copies property
devices property
exec property
filesystem_limit property
mountpoint property
nbmand property
normalization property
primarycache property
quota property
readonly property
recordsize property
refquota property
refreservation property
reservation property
secondarycache property
setuid property
sharenfs property
sharesmb property
snapdir property
snapshot_limit property
utf8only property
version property
volblocksize property
volsize property
vscan property
xattr property
zoned property
time permission
Sets "create time" permissions
zfs allow -c perm|@setname[,perm|@setname]... filesystem|volume
to a set
Defines or adds permissions to a permission set
zfs allow -s @setname perm|@setname[,perm|@setname]... filesystem|volume
UNALLOW
Removes permissions
Removes permissions that were granted with the zfs allow command. No permissions are explicitly denied, so other permissions granted are still in effect
zfs unallow [-dglru] user|group[,user|group]... [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-dlr] -e|everyone [perm|@setname[,perm|@setname]...] filesystem|volume
zfs unallow [-r] -c [perm|@setname[,perm|@setname]...] filesystem|volume
-r # Recursively remove the permissions from this file system and all descendents
from a set
Removes permissions from a permission set
zfs unallow [-r] -s @setname [perm|@setname[,perm|@setname]...] filesystem|volume
HOLD
Adds a single reference, named with the tag argument, to the specified snapshot or snapshots
zfs hold [-r] tag snapshot...
-r # Specifies that a hold with the given tag is applied recursively to the snapshots of all descendent file systems
HOLDS
Lists all existing user references for the given snapshot or snapshots
zfs holds [-r] snapshot...
-r # Lists the holds that are set on the named descendent snapshots, in addition to listing the holds on the named snapshot
RELEASE
Removes a single reference, named with the tag argument, from the specified snapshot or snapshots
zfs release [-r] tag snapshot...
-r # Recursively releases a hold with the given tag on the snapshots of all descendent file systems
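examples
# illustrative only; 'keep' is an arbitrary tag name and the dataset names are placeholders
zfs hold keep tank/home@monday # protect the snapshot from zfs destroy
zfs holds tank/home@monday # list the holds on the snapshot
zfs release keep tank/home@monday # remove the hold again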
DIFF
Display the difference between a snapshot of a given filesystem and another snapshot of that filesystem from a later time or the current contents of the filesystem
zfs diff [-FHt] snapshot snapshot|filesystem
-F # Display an indication of the type of file, in a manner similar to the -F option of ls(1).
-F B Block device
-F C Character device
-F / Directory
-F > Door
-F | Named pipe
-F @ Symbolic link
-F P Event port
-F = Socket
-F F Regular file
-H # Give more parsable tab-separated output, without header lines and without arrows.
-t # Display the path's inode change time as the first column of output.
The types of change are:
- The path has been removed
+ The path has been created
M The path has been modified
R The path has been renamed
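example
# illustrative only, assuming a snapshot 'friday' of tank/home exists
zfs diff tank/home@friday tank/home # show what changed since the snapshot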
PROPERTIES
Properties are divided into two types, native and user properties. Native properties either export internal statistics or control ZFS behavior. In addition, native properties are either editable or read-only
available # The amount of space available to the dataset and all its children
compressratio # For non-snapshots, the compression ratio achieved for the used space of this dataset, expressed as a multiplier
createtxg # The transaction group (txg) in which the dataset was created
creation # The time this dataset was created
clones # For snapshots, this property is a comma-separated list of filesystems or volumes which are clones of this snapshot
defer_destroy # This property is on if the snapshot has been marked for deferred destroy by using the zfs destroy -d command
filesystem_count # The total number of filesystems and volumes that exist under this location in the dataset tree
guid # The 64 bit GUID of this dataset or bookmark which does not change over its entire lifetime
logicalreferenced # The amount of space that is "logically" accessible by this dataset
logicalused # The amount of space that is "logically" consumed by this dataset and all its descendents
mounted # For file systems, indicates whether the file system is currently mounted
origin # For cloned file systems or volumes, the snapshot from which the clone was created
receive_resume_token # For filesystems or volumes which have saved partially-completed state from zfs receive -s, this opaque token can be provided to zfs send -t to resume and complete the zfs receive
referenced # The amount of data that is accessible by this dataset, which may or may not be shared with other datasets in the pool
refcompressratio # The compression ratio achieved for the referenced space of this dataset, expressed as a multiplier
snapshot_count # The total number of snapshots that exist under this location in the dataset tree
type # The type of dataset: filesystem, volume, or snapshot
used # The amount of space consumed by this dataset and all its descendents
usedby* # The usedby* properties decompose the used properties into the various reasons that space is used
usedbychildren # The amount of space used by children of this dataset, which would be freed if all the dataset's children were destroyed
usedbydataset # The amount of space used by this dataset itself
usedbyrefreservation # The amount of space used by a refreservation set on this dataset
usedbysnapshots # The amount of space consumed by snapshots of this dataset
userused@user # The amount of space consumed by the specified user in this dataset
userobjused@user # The userobjused property is similar to userused but instead it counts the number of objects consumed by a user
userrefs # This property is set to the number of user holds on this snapshot
groupused@group # The amount of space consumed by the specified group in this dataset
groupobjused@group # The number of objects consumed by the specified group in this dataset
volblocksize # For volumes, specifies the block size of the volume
written # The amount of referenced space written to this dataset since the previous snapshot
written@snapshot # The amount of referenced space written to this dataset since the specified snapshot
The following native properties can be used to change the behavior of a ZFS dataset
aclinherit=discard|noallow|restricted|passthrough|passthrough-x # Controls how ACEs are inherited when files and directories are created
acltype=off|noacl|posixacl # Controls whether ACLs are enabled and if so what type of ACL to use
atime=on|off # Controls whether the access time for files is updated when they are read
canmount=on|off|noauto # If this property is set to off, the file system cannot be mounted, and is ignored by zfs mount -a
checksum=on|off|fletcher2|fletcher4|sha256|noparity|sha512|skein|edonr # Controls the checksum used to verify data integrity
compression=on|off|gzip|gzip-N|lz4|lzjb|zle # Controls the compression algorithm used for this dataset
context=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level # This flag sets the SELinux context for all files in the file system under a mount point for that file system
fscontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level # This flag sets the SELinux context for the file system being mounted
defcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level # This flag sets the SELinux default context for unlabeled files
rootcontext=none|SELinux_User:SElinux_Role:Selinux_Type:Sensitivity_Level # This flag sets the SELinux context for the root inode of the file system
copies=1|2|3 # Controls the number of copies of data stored for this dataset
devices=on|off # Controls whether device nodes can be opened on this file system
dnodesize=legacy|auto|1k|2k|4k|8k|16k # Specifies a compatibility mode or literal value for the size of dnodes in the file system.
exec=on|off # Controls whether processes can be executed from within this file system
filesystem_limit=count|none # Limits the number of filesystems and volumes that can exist under this point in the dataset tree
mountpoint=path|none|legacy # Controls the mount point used for this file system
nbmand=on|off # Controls whether the file system should be mounted with nbmand (Non Blocking mandatory locks)
overlay=off|on # Allow mounting on a busy directory or a directory which already contains files or directories
primarycache=all|none|metadata # Controls what is cached in the primary cache (ARC)
quota=size|none # Limits the amount of space a dataset and its descendents can consume
snapshot_limit=count|none # Limits the number of snapshots that can be created on a dataset and its descendents
userquota@user=size|none # Limits the amount of space consumed by the specified user
userobjquota@user=size|none # The userobjquota is similar to userquota but it limits the number of objects a user can create
groupquota@group=size|none # Limits the amount of space consumed by the specified group
groupobjquota@group=size|none # The groupobjquota is similar to groupquota but it limits number of objects a group can consume
readonly=on|off # Controls whether this dataset can be modified
recordsize=size # Specifies a suggested block size for files in the file system
redundant_metadata=all|most # Controls what types of metadata are stored redundantly
refquota=size|none # Limits the amount of space a dataset can consume
refreservation=size|none # The minimum amount of space guaranteed to a dataset, not including its descendents
relatime=on|off # Controls the manner in which the access time is updated when atime=on is set
reservation=size|none # The minimum amount of space guaranteed to a dataset and its descendants
secondarycache=all|none|metadata # Controls what is cached in the secondary cache (L2ARC)
setuid=on|off # Controls whether the setuid bit is respected for the file system
sharesmb=on|off|opts # Controls whether the file system is shared by using Samba USERSHARES and what options are to be used
sharenfs=on|off|opts # Controls whether the file system is shared via NFS, and what options are to be used
logbias=latency|throughput # Provide a hint to ZFS about handling of synchronous requests in this dataset
snapdev=hidden|visible # Controls whether the volume snapshot devices under /dev/zvol/<pool> are hidden or visible
snapdir=hidden|visible # Controls whether the .zfs directory is hidden or visible in the root of the file system as discussed in the Snapshots section
sync=standard|always|disabled # Controls the behavior of synchronous requests (e.g. fsync, O_DSYNC)
version=N|current # The on-disk version of this file system, which is independent of the pool version
volsize=size # For volumes, specifies the logical size of the volume
volmode=default|full|geom|dev|none # This property specifies how volumes should be exposed to the OS
vscan=on|off # Controls whether regular files should be scanned for viruses when a file is opened and closed
xattr=on|off|sa # Controls whether extended attributes are enabled for this file system
zoned=on|off # Controls whether the dataset is managed from a non-global zone
The following three properties cannot be changed after the file system is created, and therefore, should be set when the file system is created
If the properties are not set with the zfs create or zpool create commands, these properties are inherited from the parent dataset. If the parent dataset lacks these properties due to having been created prior to these features being supported, the new file system will have the default values for these properties.
casesensitivity=sensitive|insensitive|mixed # Indicates whether the file name matching algorithm used by the file system should be case-sensitive, case-insensitive, or allow a combination of both styles of matching
normalization=none|formC|formD|formKC|formKD # Indicates whether the file system should perform a unicode normalization of file names whenever two file names are compared, and which normalization algorithm should be used
utf8only=on|off # Indicates whether the file system should reject file names that include characters that are not present in the UTF-8 character code set
rsync
A fast, versatile, remote (and local) file-copying tool
-v, --verbose # increase verbosity
-q, --quiet # suppress non-error messages
-a, --archive # archive mode; equals -rlptgoD (no -H,-A,-X)
-r, --recursive # recurse into directories
-l, --links # copy symlinks as symlinks
-p, --perms # preserve permissions
-t, --times # preserve modification times
-o, --owner # preserve owner (super-user only)
-g, --group # preserve group
--devices # preserve device files (super-user only)
--specials # preserve special files
-D # same as --devices --specials
-e, --rsh=COMMAND # specify the remote shell to use (e.g. "ssh -p 22")
examples
rsync -rlptDv --delete -n # like -av --delete but without preserving owner & group; -n makes it a dry run
rsync -e "ssh -p 22" # use specified port 22 to connect to remote server
rsync Music/ root@node1:/save/vm/nextcloud/data/aguy/files/perso/music/ -rlptDv --delete -n
https://ss64.com/bash/syntax.html
SYNTAX
command > filename # Redirect command output (stdout) into a file
command > /dev/null # Discard stdout of command
command 2> filename # Redirect error output (stderr) to a file
command 2>&1 # Redirect stderr to stdout
command 1>&2 # Redirect stdout to stderr
command >> filename # Redirect command output and APPEND into a file
command < filename # Redirect a file into a command
command1 < <(command2) # Redirect the output of command2 as file input to command1
command1 | tee filename | command2 # Redirect command1 into filename AND command2
command1 | command2 # Redirect stdout of command1 to command2
command1 |& command2 # Redirect both stdout and stderr of command1 to command2
command1 & command2 # Run command1 in the background and then run command2 (asynchronous)
command1 ; command2 # Run command1 and afterwards run command2 (synchronous)
command1 && command2 # Run command2 only if command1 is successful (synchronous AND)
command1 || command2 # Run command2 only if command1 is NOT successful
command & # Run command in the background (in a subshell)
command &> filename # Redirect every output of command to filename
command > >(tee -a filename1 filename2) # Redirect command output (stdout) to stdout and into filename1 and filename2
# the noclobber option can prevent overwriting an existing file
set -o noclobber # turn ON noclobber
set +o noclobber # turn OFF noclobber
[n]<word # Redirection of input causes the file whose name results from the expansion of word to be opened for reading on file descriptor n, or the standard input (file descriptor 0) if n is not specified.
[n]>[|]word # Redirection of output causes the file whose name results from the expansion of word to be opened for writing on file descriptor n, or the standard output (file descriptor 1) if n is not specified. If the file does not exist it is created; if it does exist it is truncated to zero size. If the redirection operator is '>', and the noclobber option to the set builtin has been enabled, the redirection will fail if the file whose name results from the expansion of word exists and is a regular file. If the redirection operator is '>|', or the redirection operator is '>' and the noclobber option is not enabled, the redirection is attempted even if the file named by word exists.
[n]>>word # Redirection of output in this fashion causes the file whose name results from the expansion of word to be opened for appending on file descriptor n, or the standard output (file descriptor 1) if n is not specified. If the file does not exist it is created.
# There are three formats for redirecting standard output and standard error:
&>word
>&word
>word 2>&1
ls > dirlist 2>&1 # directs both standard output (file descriptor 1) and standard error (file descriptor 2) to the file dirlist, while the command
ls 2>&1 > dirlist # directs only the standard output to file dirlist, because the standard error was duplicated as standard output before the standard output was redirected to dirlist.
DESCRIPTOR
exec 3< echolist # open file 'echolist' for reading on fd 3
exec 3<&- # close fd 3 (input)
exec 3>&- # close fd 3 (output)
exec 3<&1 # make fd 3 a copy of stdout
exec 2> >(tee -a /tmp/2) > >(tee -a /tmp/1) 4>&1 # send stderr through tee into /tmp/2, stdout through tee into /tmp/1, and make fd 4 a copy of stdout
examples
echo 1234567890 > $file # Write string to file
exec 3<> $file # Open $file and assign fd 3 to it
read -n 4 <&3 # Read only 4 characters
echo -n . >&3 # Write a decimal point there
exec 3>&- # Close fd 3
cat $file # show 1234.67890
SPECIAL FILE FOR REDIRECTIONS
/dev/fd/fd # If fd is a valid integer, file descriptor fd is duplicated
/dev/stdin # File descriptor 0 is duplicated
/dev/stdout # File descriptor 1 is duplicated
/dev/stderr # File descriptor 2 is duplicated
/dev/tcp/host/port # If host is a valid hostname or Internet address, and port is an integer port number, Bash attempts to open a TCP connection to the corresponding socket
/dev/udp/host/port # If host is a valid hostname or Internet address, and port is an integer port number, Bash attempts to open a UDP connection to the corresponding socket
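A minimal sketch using /dev/tcp (bash only; the host example.com and port 80 are arbitrary examples):
exec 3<>/dev/tcp/example.com/80 # open a read/write TCP connection on fd 3
printf 'HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n' >&3 # send a request
cat <&3 # read the response until the server closes
exec 3>&- # close the descriptor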
HERE DOCUMENTS
This type of redirection instructs the shell to read input from the current source until a line containing only word (with no trailing blanks) is seen. All of the lines read up to that point are then used as the standard input for a command. If the redirection operator is '<<-', then all leading tab characters are stripped from input lines and the line containing delimiter. This allows here-documents within shell scripts to be indented in a natural fashion
<<[-]word
here-document
word
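A minimal here-document sketch:
cat <<EOF
hello $USER
today is $(date +%F)
EOF
cat <<'EOF' # quoting the delimiter disables variable & command expansion
$USER is printed literally
EOF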
HERE STRINGS
A here string can be considered as a stripped-down form of a here document.
It consists of nothing more than command <<<$word, where $word is expanded and fed to the stdin of command.
command <<<$word
command <<<"$word" # keep formatting
DUPLICATING FILE DESCRIPTORS
[n]<&word
Is used to duplicate input file descriptors. If word expands to one or more digits, the file descriptor denoted by n is made to be a copy of that file descriptor. If the digits in word do not specify a file descriptor open for input, a redirection error occurs. If word evaluates to '-', file descriptor n is closed. If n is not specified, the standard input (file descriptor 0) is used
[n]>&word
Is used similarly to duplicate output file descriptors. If n is not specified, the standard output (file descriptor 1) is used. If the digits in word do not specify a file descriptor open for output, a redirection error occurs. As a special case, if n is omitted, and word does not expand to one or more digits, the standard output and standard error are redirected as described previously
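A classic use of duplication, sketched here, is swapping stdout and stderr through a spare descriptor:
command 3>&1 1>&2 2>&3 3>&- # fd 3 saves stdout, stdout becomes stderr, stderr becomes the saved stdout, fd 3 is then closed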
THE REDIRECTION OPERATOR
[n]<>word
causes the file whose name is the expansion of word to be opened for both reading and writing on file descriptor n, or on file descriptor 0 if n is not specified. If the file does not exist, it is created.
PROCESS SUBSTITUTION
>(commands) & <(commands)
The process list is run with its input or output connected to a FIFO or some file in /dev/fd. The name of this file is passed as an argument to the current command as the result of the expansion. If the >(list) form is used, writing to the file will provide input for list. If the <(list) form is used, the file passed as an argument should be read to obtain the output of list. Note that no space can appear between the < or > and the left parenthesis, otherwise the construct would be interpreted as a redirection.
examples
$(< file) is faster than $(cat file)
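A couple of further sketches (file names are arbitrary):
diff <(sort file1) <(sort file2) # compare two files after sorting, without temporary files
comm -12 <(sort a.txt) <(sort b.txt) # lines common to both files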
COMMAND EDITING
ctrl+ a # go to the start of the command line
ctrl+ e # go to the end of the command line
ctrl+ k # delete from cursor to the end of the command line
ctrl+ u # delete from cursor to the start of the command line
ctrl+ w # delete from cursor to start of word (i.e. delete backwards one word)
ctrl+ y # paste word or text that was cut using one of the deletion shortcuts (such as the one above) after the cursor
ctrl+ xx # move between start of command line and current cursor position (and back again)
alt+ b # move backward one word (or go to start of word the cursor is currently on)
alt+ f # move forward one word (or go to end of word the cursor is currently on)
alt+ d # delete to end of word starting at cursor (whole word if cursor is at the beginning of word)
alt+ c # capitalize to end of word starting at cursor (whole word if cursor is at the beginning of word)
alt+ u # make uppercase from cursor to end of word
alt+ l # make lowercase from cursor to end of word
alt+ t # swap current word with previous
ctrl+ f # move forward one character
ctrl+ b # move backward one character
ctrl+ d # delete character under the cursor
ctrl+ h # delete character before the cursor
ctrl+ t # swap character under cursor with the previous one
COMMAND RECALL
ctrl+ r # search the history backwards
ctrl+ g # escape from history searching mode
ctrl+ p # previous command in history (i.e. walk back through the command history)
ctrl+ n # next command in history (i.e. walk forward through the command history)
alt+ . # use the last word of the previous command
COMMAND CONTROL
ctrl+ l # clear the screen
ctrl+ s # stops the output to the screen (for long running verbose command)
ctrl+ q # allow output to the screen (if previously stopped using command above)
ctrl+ c # terminate the command
ctrl+ z # suspend/stop the command
BASH BANG COMMAND !
!! # run last command
!blah # run the most recent command that starts with ‘blah’ (e.g. !ls)
!blah:p # print out the command that !blah would run (also adds it as the latest command in the command history)
!$ # the last word of the previous command (same as alt+ .)
!$:p # print out the word that !$ would substitute
!* # the previous command except for the last word (e.g. if you type ‘_find somefile.txt /’, then !* would give you ‘_find somefile.txt’)
!*:p # print out what !* would substitute
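Typical combinations (hedged examples):
sudo !! # rerun the previous command with sudo
vim !$ # reopen the last argument of the previous command
^foo^bar # rerun the previous command with the first 'foo' replaced by 'bar'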
COMMAND
'<>' represents the tmux prefix key used to enter command mode. The default prefix is ctrl+b; here it is remapped to ctrl+q
<> : # inside a tmux session, open the tmux command prompt (abstract tmux command)
tmux ls
tmux list-sessions
tmux list-windows -t $session # list windows in session $session (or current)
tmux / tmux new / tmux new-session # launch a new session
tmux new -s $session # launch a new session with a name $session
<> : kill-session -t $session # kill session $session (name or number)
<> : kill-session -a # kill all sessions
<> : kill-session -a -t $session # kill all but session $session
<> : rename-session -t $session # rename session
<> : rename-window -t $session:$window # rename window
<> : detach / : attach -d # detach session
<> : a / : attach -t $session:$windows # attach session / windows (or current session)
<> : switch -t $session # switch to session $session
SHORTCUTS
session
<> + d # detach current session
<> + ) # move to next session
<> + ( # move to previous session
<> + s # interactive session selection
window
<> + c # create a new window in current session
<> + n # switch to next window
<> + p # switch to previous window
<> + $n # switch to window number $n (numbering starts at 0)
<> + , # rename current window
<> + w # interactive window selection
pane
<> + " # new horizontal pane below
<> + % # new vertical pane to the right
<> + x # kill pane
<> + z # toggle pane zoom
<> + ctrl+n # switch to next pane
<> + ctrl+p # switch to previous pane
<> + { # swap the current pane with the previous pane
<> + } # swap the current pane with the next pane
<> + space # switch layout
<> + q # show panes number
<> + q $n # move to pane $n
<> + ! # convert pane to window
<> + left # move to the pane on the left
<> + right # move to the pane on the right
<> + up # move to the pane above
<> + down # move to the pane below
others
<> + ? # show all shortcuts
<> + t # show time in windows
<> + [ # enter in copy-mode (q or <esc> to quit)
perso
<> + R # Reload conf file
<> + m # enable mouse mode
<> + M # disable mouse mode
<> + k # kill window
<> + K # kill server
copy mode
ctrl+PageUp # enter copy mode & scroll one page up
alt+Up # enter copy mode & scroll up
ctrl+s # search the history while in copy mode
PLUGINS
https://github.com/tmux-plugins
RESURRECT
resurrect / Persists tmux environment across system restarts
https://github.com/tmux-plugins/tmux-resurrect
<> + ctrl+s # save the current workspace (sessions)
<> + ctrl+r # restore the workspace (sessions)
The last saved workspace is a symlink 'last' pointing to the active workspace file in ~/.tmux/resurrect/, where the history of saved states is kept
autostart
sudo tee /etc/systemd/system/tmux.service > /dev/null << EOF # use tee because sudo does not apply to shell redirections
[Unit]
Description=tmux default session (detached)
Documentation=man:tmux(1)
[Service]
Type=forking
User=nikita
Group=nikita
Environment=DISPLAY=:0
ExecStart=/usr/bin/tmux new-session -d
ExecStop=/home/${USER}/.tmux/plugins/resurect/scripts/save.sh
ExecStop=/usr/bin/tmux kill-server
KillMode=none
RestartSec=2
[Install]
WantedBy=default.target
EOF
systemctl enable tmux --now
install
conf
sudo sh -c "echo '\n# sublime-text hack\n127.0.0.1\tsublimetext.com\n127.0.0.1\twww.sublimetext.com\n127.0.0.1\tlicense.sublimehq.com' >> /etc/hosts"
apt install -y iptables-persistent
ips="45.55.255.55"
for ip in ${ips}; do sudo iptables -A OUTPUT -d ${ip} -j DROP; done
path="/etc/iptables"
[ -d "${path}" ] || sudo mkdir "${path}"
sudo iptables-save -f /etc/iptables/rules.v4
build 3211
----- BEGIN LICENSE -----
Member J2TeaM
Single User License
EA7E-1011316
D7DA350E 1B8B0760 972F8B60 F3E64036
B9B4E234 F356F38F 0AD1E3B7 0E9C5FAD
FA0A2ABE 25F65BD8 D51458E5 3923CE80
87428428 79079A01 AA69F319 A1AF29A4
A684C2DC 0B1583D4 19CBD290 217618CD
5653E0A0 BACE3948 BB2EE45E 422D2C87
DD9AF44B 99C49590 D2DBDEE1 75860FD2
8C8BB2AD B2ECE5A4 EFC08AF2 25A9B864
------ END LICENSE ------
build 3176
----- BEGIN LICENSE -----
sgbteam
Single User License
EA7E-1153259
8891CBB9 F1513E4F 1A3405C1 A865D53F
115F202E 7B91AB2D 0D2A40ED 352B269B
76E84F0B CD69BFC7 59F2DFEF E267328F
215652A3 E88F9D8F 4C38E3BA 5B2DAAE4
969624E7 DC9CD4D5 717FB40C 1B9738CF
20B3C4F1 E917B5B3 87C38D9C ACCE7DD8
5F7EF854 86B9743C FADC04AA FB0DA5C0
F913BE58 42FEA319 F954EFDD AE881E0B
------ END LICENSE ------
packagecontrol
https://packagecontrol.io # packages info + shortcuts
COLOR SCHEME
package control / Install Package / PackageResourceViewer
package control / PackageResourceViewer / Open Resource / select scheme
package control / PackageResourceViewer / Theme - Default / adaptive / Adaptive.sublime-theme
SIZE
/home/$USER/.config/sublime-text-3/Packages/Theme - Default/adaptive/Adaptive.sublime-theme
/home/$USER/.config/sublime-text-3/Packages/Theme - Default/Default.sublime-theme
PACKAGE
python
- Anaconda # turns your Sublime Text 3 into a full-featured Python development IDE
- Python Debugger # graphical debugger for Sublime Text
- Python 3 # Python 3 and Cython language bundles for Sublime Text and TextMate
- DocBlockr_Python # Sublime Text DocBlockr for Python, simplifies writing docstring comments in Python
https://wiki.manjaro.org/index.php?title=Pacman_Overview
https://wiki.archlinux.fr/pacman
https://community.chakralinux.org/t/how-to-use-pacman-to-search-for-install-upgrade-and-uninstall-packages
PACMAN CONF
all configurations for pacman are stored in:
/etc/pacman.conf
UPGRADE -U
yay -U /var/cache/pacman/pkg/python-3.9.3-1-x86_64.pkg.tar.zst # install a specific version
IGNORE
/etc/pacman.conf
sudo sed -i '/^#IgnorePkg/a IgnorePkg = libvirt' /etc/pacman.conf
sudo sed -i '/^#IgnorePkg/a Include = /etc/pacman.d/ignorepcks' /etc/pacman.conf
sudo sh -c "echo -e 'IgnorePkg = libvirt\nIgnorePkg = libvirt-python' > /etc/pacman.d/ignorepcks"
SYNC -S
pacman -Si $package # get information about package
pacman -Sl $repo # list all packages from repository
pacman -Sy # update local packages list
pacman -Syy # force refresh of all package lists, even if they appear up to date
pacman -Su # upgrade packages
pacman -Suu # upgrade & downgrade if needed
pacman -Syu # update & upgrade
pacman -Syyu
pacman -Syyuu
pacman -Syuc # update, upgrade, clean
pacman -S package # install package
pacman -S core/package_name # install a specified repo/core package
pacman -S plasma-{desktop,mediacenter,nm} # install three packages using brace expansion
pacman -Ss $regex # search packages
pacman -Ssq $regex # search packages with minimal print
pacman -S $pack... # install packages
pacman -S $repo/$pack # install package from specific repo
REMOVE -R
pacman -R $pack... # remove packages
pacman -Rs $pack... # remove packages and dependencies
pacman -Ru $pack... # remove packages only if they are not required by any other package
pacman -Qdt # list orphan packages
pacman -Rsn $(pacman -Qdtq) # remove orphan packages
QUERY -Q
pacman -Qqettn # List of Native installed packages
pacman -Qqettm # List of AUR installed packages
DOWNGRADE
pamac install downgrade # install downgrade
downgrade $package # interactive downgrade package
PACMAN-MIRRORS
pacman-mirrors --country-list # get list of countries for repositories
pacman-mirrors -c France # select a country for repositories
pacman-mirrors -i # interactively generate a custom mirrorlist
pacman-mirrors -c all # reset custom mirrorlist
PACTREE
pactree package_name # list all the packages recursively depending on an installed package
pactree -r package_name # list the packages that depend on package_name (reverse dependencies)
AUR REPO & YAOURT
https://wiki.manjaro.org/index.php/D%C3%A9p%C3%B4t_AUR_%28Arch_User_Repository%29
POST-INSTALLATION
http://stephenmyall.com/manjaro/
https://dolys.fr/forums/topic/mon-installation-post-installation-manjaro/
AUDIO
For a USB audio microphone + headphone device (id 08bb:2902)
file=/lib/udev/rules.d/90-pulseaudio.rules
sudo cp -a ${file} ${file}.$(date +%s)
sudo sed -i '/08bb.*2902.*behringer-umc22/ s|^|#|' ${file}
LXD
cgroup v2 error messaging
file=/etc/default/grub
part='systemd.unified_cgroup_hierarchy=0'
sudo cp -a ${file} ${file}.$(date +%s)
grep -q "^GRUB_CMDLINE_LINUX=.*${part}" ${file} || sudo sed -i "/^GRUB_CMDLINE_LINUX=/ s|\"$| ${part}\"|" ${file}
sudo update-grub
EFIVARFS
file=/etc/default/grub
part='efi=runtime'
sudo cp -a ${file} ${file}.$(date +%s)
grep -q "^GRUB_CMDLINE_LINUX=.*${part}" ${file} || sudo sed -i '/^GRUB_CMDLINE_LINUX=/ s|"$| ${part}"|' ${file}
sudo update-grub
&
file=/etc/mkinitcpio.conf
part='efivarfs'
sudo cp -a ${file} ${file}.$(date +%s)
grep -q "^MODULES=.*${part}" ${file} || sed -i '/^MODULES=/ s|)$| ${part})|' ${file}
sudo mkinitcpio -P
LOGIN-SCREEN
/etc/dconf/db/gdm.d/02-logo
[org/gnome/login-screen]
logo='/path/to/logo.png'
logo
gsettings set org.gnome.login-screen logo '/path/to/logo.png'
dconf # /org/gnome/login-screen/logo
https://wiki.archlinux.org/index.php/GDM
DEVICES
lsmod # show modules
lspci -k # show PCI devices with the kernel drivers/modules in use
inxi -Fxz # show full details of the machine's devices
VIDEO
inxi -G # show video information
sudo mhwd -a pci nonfree 0300 # install non free driver
mhwd -li # show installed driver from mhwd
yay -S system-config-printer manjaro-printer
sudo usermod -a -G cups $USER
sudo gpasswd -a $USER sys
sudo systemctl enable --now org.cups.cupsd.service
RADIOTRAY
sudo ln -sv /usr/lib/libjsoncpp.so.24 /usr/lib/libjsoncpp.so.22
KVM QEMU Virt-Manager
https://computingforgeeks.com/install-kvm-qemu-virt-manager-arch-manjar/
GRUB COMMAND
press 'c' during boot to interrupt GRUB and enter the GRUB command line
CHANGE GRUB DISPLAY RESOLUTION
vbeinfo # list available resolutions
modify in /etc/default/grub (ex: 800x600)
GRUB_GFXMODE=$resolution
CLEAR SECTOR 32
dd if=/dev/zero of=/dev/sda bs=512 count=1 seek=32
grub-install $device
USB3 MSI GAMING
add in /etc/default/grub
GRUB_CMDLINE_LINUX="iommu=soft"
LXD
/etc/default/grub
GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=0
GRUB_TIMEOUT
Modify 'set timeout' to the chosen value in /boot/grub/grub.cfg
if [ "$recordfail_broken" = 1 ]; then
cat << EOF
if [ \$grub_platform = efi ]; then
set timeout=${GRUB_RECORDFAIL_TIMEOUT:-30}
if [ x\$feature_timeout_style = xy ] ; then
set timeout_style=menu
fi
fi
EOF
fi
MANJARO
OS real name for btrfs
file="/etc/grub.d/30_os-prober"
file_keep="$file.keep$(date +%s)"
if ! grep -q 'LONGNAME="${LONGNAME} ${BTRFSsubvol/#subvol=/}"' "$file"; then
sudo cp -a "$file" "$file.keep$(date +%s)" && sudo chmod -x "$file_keep"
sudo sed -i "/LONGNAME=\"\${LABEL}\"/ a\ else\n LONGNAME=\"\${LONGNAME} \${BTRFSsubvol/#subvol=/}\"" "$file"
sudo update-grub
fi
UBUNTU 18.04
Remove the error in the startup log for systemd-backlight@backlight:acpi_video0.service: add to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub
acpi_osi='!Windows 2012'
acpi_backlight=vendor
BOOT ENTRY
https://linux.die.net/man/8/efibootmgr
efibootmgr
efibootmgr # list all boot entries
-v # list all boot entries with details
-B -b XXXX # delete boot entry number XXXX
example for 970g
efibootmgr -c -g -d /dev/sda1 -p 1 -w -L 'Manjaro' -l '\EFI\Manjaro\grubx64.efi'
efibootmgr -c -g -d /dev/sda1 -p 1 -w -L 'ubuntu' -l '\EFI\ubuntu\shimx64.efi'
efibootmgr -c -g -d /dev/sda1 -p 1 -w -L 'Windows Boot Manager' -l '\EFI\Microsoft\Boot\bootmgfw.efi'
>>
Timeout: 1 seconds
BootOrder: 0000,0001,0002
Boot0000* Manjaro HD(1,GPT,8e91a305-046d-4e90-8548-efca286325a7,0x800,0x32000)/File(\EFI\Manjaro\grubx64.efi)
Boot0001* ubuntu HD(1,GPT,8e91a305-046d-4e90-8548-efca286325a7,0x800,0x32000)/File(\EFI\ubuntu\shimx64.efi)
Boot0002* Windows Boot Manager HD(1,GPT,8e91a305-046d-4e90-8548-efca286325a7,0x800,0x32000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)
256 colors for nested tmux
20.04
export TERM=xterm-256color
turn off/disable bluetooth device startup
18.04
grep -n DEVICES_TO_DISABLE_ON_STARTUP /etc/default/tlp
auto login
sudo sh -c "echo '
# autologin
[Seat:*]
autologin-session=xubuntu
autologin-user=${USER}
autologin-user-timeout=0' >> /etc/lightdm/lightdm.conf"
INSTALL
install
apt-get install -y opendkim opendkim-tools
data
domain="17112018.fr"
path_keys="/etc/opendkim/keys"
dkim="dkim"
KEYS
mkdir -p ${path_keys}/${domain}
cd ${path_keys}/${domain}
opendkim-genkey --bits=2048 -s ${dkim} -d ${domain}
chown opendkim:opendkim ${dkim}.private
chmod g-rwx ${dkim}.private
test
opendkim-testkey -d ${domain} -s ${dkim} -k /etc/opendkim/keys/${domain}/${dkim}.private -vvv
CONF
/etc/opendkim.conf
AutoRestart Yes
AutoRestartRate 10/1h
UMask 002
Syslog yes
SyslogSuccess Yes
LogWhy Yes
Canonicalization relaxed/simple
ExternalIgnoreList refile:/etc/opendkim/TrustedHosts
InternalHosts refile:/etc/opendkim/TrustedHosts
KeyTable refile:/etc/opendkim/KeyTable
SigningTable refile:/etc/opendkim/SigningTable
Mode sv
PidFile /var/run/opendkim/opendkim.pid
SignatureAlgorithm rsa-sha256
UserID opendkim:opendkim
Socket inet:12301@localhost
/etc/default/opendkim
SOCKET="inet:12301@localhost"
/etc/postfix/main.cf
milter_protocol = 2
milter_default_action = accept
# without spamassassin
smtpd_milters = inet:localhost:12301
non_smtpd_milters = inet:localhost:12301
# with spamassassin
#smtpd_milters = unix:/spamass/spamass.sock, inet:localhost:12301
#non_smtpd_milters = unix:/spamass/spamass.sock, inet:localhost:12301
/etc/opendkim/TrustedHosts
127.0.0.1
localhost
# IP senders
$SENDER_IP
# Domains senders
*.${domain}
/etc/opendkim/KeyTable
${dkim}._domainkey.${domain} ${domain}:${dkim}:${path_keys}/${domain}/${dkim}.private
/etc/opendkim/SigningTable
*@${domain} ${dkim}._domainkey.${domain}
RESTART
systemctl restart postfix opendkim
SENDER
/etc/postfix/main.cf
relayhost = [$receiver_ip]
RECEIVER
/etc/postfix/main.cf
myhostname = $domain_to_relay
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = $myhostname, localhost.localdomain, localhost
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 $sender_ip1 $sender_ip2
virtual_alias_maps = hash:/etc/postfix/virtual
alias
/etc/postfix/virtual
$email_alias $email_to_send
Compile modifications
postmap /etc/postfix/virtual
postfix reload
postmap -q $email_alias /etc/postfix/virtual
RESTART
systemctl restart postfix.service
ALIASES / EXIM4
http://debian-facile.org/doc:reseau:exim4:redirection-mails-locaux
# add alias to container
ls /vm/root/*/etc/aliases|xargs -L 1 sed -i "\$a root: tech@17112018.fr"
cat /vm/root/*/etc/aliases
vz-launch -y "newaliases" all
vz-launch -y "exim -bt root" all
vz-launch -y "systemctl restart exim4" all
# test alias
exim -bt root
INSTALL
get mybb-1.8.19.tgz & put it in $path_install
domain="17112018.fr"
subdomain="forum"
path_install="/var/share/www/17112018.fr"
path="$path_install/forum"
version="1.8.19"
file_mybb="mybb-${version}"
cd "$path_install"
tar xzf "${file_mybb}.tgz"
ln -sv "$file_mybb" "$subdomain"
ln -sv "$file_mybb/Upload/" "$subdomain"
PRECONF
cp -a "$subdomain/htaccess.txt" "$subdomain/.htaccess"
sed -i '/Order Deny,Allow/d' "$subdomain/.htaccess"
sed -i 's/Deny from all/Require all denied/' "$subdomain/.htaccess"
# add subdomain in apache conf
nano /etc/apache2/sites-available/$domain.conf
a2ensite ${domain}.conf
SETUP
https://${subdomain}.${domain} # launch
POSTCONF
systemctl restart apache2.service
nano inc/config.php
chmod 666 "$subdomain/inc/config.php" "$subdomain/inc/settings.php"
chmod 777 "$subdomain/cache/" "$subdomain/cache/themes/" "$subdomain/uploads/" "$subdomain/uploads/avatars/"
DOMAINS
domains_17112018_fr="17112018.fr,ada.17112018.fr,admin.17112018.fr,blog.17112018.fr,carte.17112018.fr,chat.17112018.fr,chiffres.17112018.fr,cloud.17112018.fr,cms.17112018.fr,code.17112018.fr,compta.17112018.fr,datadoghq.17112018.fr,democratie.17112018.fr,dev.17112018.fr,diaspora.17112018.fr,discord.17112018.fr,discourse.17112018.fr,down.17112018.fr,elastik.17112018.fr,fil.17112018.fr,filerun.17112018.fr,files.17112018.fr,forum.17112018.fr,gestion.17112018.fr,git.17112018.fr,gitea.17112018.fr,gitlab.17112018.fr,goaccess.17112018.fr,grav.17112018.fr,graylog.17112018.fr,info.17112018.fr,kibana.17112018.fr,lcr.17112018.fr,liens.17112018.fr,links.17112018.fr,log.17112018.fr,manage.17112018.fr,matomo.17112018.fr,metrics.17112018.fr,monitor.17112018.fr,nextcloud.17112018.fr,pfa.17112018.fr,piwik.17112018.fr,plumxml.17112018.fr,pma.17112018.fr,roundcube.17112018.fr,shaarli.17112018.fr,snippet.17112018.fr,social.17112018.fr,st.17112018.fr,statistiques.17112018.fr,stats.17112018.fr,test.17112018.fr,tiddly.17112018.fr,tuleap.17112018.fr,vma.17112018.fr,vmail.17112018.fr,webmail.17112018.fr,wiki.17112018.fr,www.17112018.fr,zabbix.17112018.fr"
domains_17112018_ovh="17112018.ovh,ada.17112018.ovh,admin.17112018.ovh,blog.17112018.ovh,carte.17112018.ovh,chat.17112018.ovh,chiffres.17112018.ovh,cloud.17112018.ovh,cms.17112018.ovh,code.17112018.ovh,compta.17112018.ovh,datadoghq.17112018.ovh,democratie.17112018.ovh,dev.17112018.ovh,diaspora.17112018.ovh,discord.17112018.ovh,discourse.17112018.ovh,down.17112018.ovh,elastik.17112018.ovh,fil.17112018.ovh,filerun.17112018.ovh,files.17112018.ovh,forum.17112018.ovh,gestion.17112018.ovh,git.17112018.ovh,gitea.17112018.ovh,gitlab.17112018.ovh,goaccess.17112018.ovh,grav.17112018.ovh,graylog.17112018.ovh,info.17112018.ovh,kibana.17112018.ovh,lcr.17112018.ovh,liens.17112018.ovh,links.17112018.ovh,log.17112018.ovh,manage.17112018.ovh,matomo.17112018.ovh,metrics.17112018.ovh,monitor.17112018.ovh,nextcloud.17112018.ovh,pfa.17112018.ovh,piwik.17112018.ovh,plumxml.17112018.ovh,pma.17112018.ovh,roundcube.17112018.ovh,shaarli.17112018.ovh,snippet.17112018.ovh,social.17112018.ovh,st.17112018.ovh,statistiques.17112018.ovh,stats.17112018.ovh,test.17112018.ovh,tiddly.17112018.ovh,tuleap.17112018.ovh,vma.17112018.ovh,vmail.17112018.ovh,webmail.17112018.ovh,wiki.17112018.ovh,www.17112018.ovh,zabbix.17112018.ovh"
domains_ambau_ovh="ambau.ovh,ada.ambau.ovh,admin.ambau.ovh,blog.ambau.ovh,carte.ambau.ovh,chat.ambau.ovh,chiffres.ambau.ovh,cloud.ambau.ovh,cms.ambau.ovh,code.ambau.ovh,compta.ambau.ovh,datadoghq.ambau.ovh,democratie.ambau.ovh,dev.ambau.ovh,diaspora.ambau.ovh,discord.ambau.ovh,discourse.ambau.ovh,down.ambau.ovh,elastik.ambau.ovh,fil.ambau.ovh,filerun.ambau.ovh,files.ambau.ovh,forum.ambau.ovh,gestion.ambau.ovh,git.ambau.ovh,gitea.ambau.ovh,gitlab.ambau.ovh,goaccess.ambau.ovh,grav.ambau.ovh,graylog.ambau.ovh,info.ambau.ovh,kibana.ambau.ovh,lcr.ambau.ovh,liens.ambau.ovh,links.ambau.ovh,log.ambau.ovh,manage.ambau.ovh,matomo.ambau.ovh,metrics.ambau.ovh,monitor.ambau.ovh,nextcloud.ambau.ovh,pfa.ambau.ovh,piwik.ambau.ovh,plumxml.ambau.ovh,pma.ambau.ovh,roundcube.ambau.ovh,shaarli.ambau.ovh,snippet.ambau.ovh,social.ambau.ovh,st.ambau.ovh,statistiques.ambau.ovh,stats.ambau.ovh,test.ambau.ovh,tiddly.ambau.ovh,tuleap.ambau.ovh,vma.ambau.ovh,vmail.ambau.ovh,webmail.ambau.ovh,wiki.ambau.ovh,www.ambau.ovh,zabbix.ambau.ovh"
domains_ggj_fr="ggj.fr,ada.ggj.fr,admin.ggj.fr,blog.ggj.fr,carte.ggj.fr,chat.ggj.fr,chiffres.ggj.fr,cloud.ggj.fr,cms.ggj.fr,code.ggj.fr,compta.ggj.fr,datadoghq.ggj.fr,democratie.ggj.fr,dev.ggj.fr,diaspora.ggj.fr,discord.ggj.fr,discourse.ggj.fr,down.ggj.fr,elastik.ggj.fr,fil.ggj.fr,filerun.ggj.fr,files.ggj.fr,forum.ggj.fr,gestion.ggj.fr,git.ggj.fr,gitea.ggj.fr,gitlab.ggj.fr,goaccess.ggj.fr,grav.ggj.fr,graylog.ggj.fr,info.ggj.fr,kibana.ggj.fr,lcr.ggj.fr,liens.ggj.fr,links.ggj.fr,log.ggj.fr,manage.ggj.fr,matomo.ggj.fr,metrics.ggj.fr,monitor.ggj.fr,nextcloud.ggj.fr,pfa.ggj.fr,piwik.ggj.fr,plumxml.ggj.fr,pma.ggj.fr,roundcube.ggj.fr,shaarli.ggj.fr,snippet.ggj.fr,social.ggj.fr,st.ggj.fr,statistiques.ggj.fr,stats.ggj.fr,test.ggj.fr,tiddly.ggj.fr,tuleap.ggj.fr,vma.ggj.fr,vmail.ggj.fr,webmail.ggj.fr,wiki.ggj.fr,www.ggj.fr,zabbix.ggj.fr"
domains_ggj_ovh="ggj.ovh,ada.ggj.ovh,admin.ggj.ovh,blog.ggj.ovh,carte.ggj.ovh,chat.ggj.ovh,chiffres.ggj.ovh,cloud.ggj.ovh,cms.ggj.ovh,code.ggj.ovh,compta.ggj.ovh,datadoghq.ggj.ovh,democratie.ggj.ovh,dev.ggj.ovh,diaspora.ggj.ovh,discord.ggj.ovh,discourse.ggj.ovh,down.ggj.ovh,elastik.ggj.ovh,fil.ggj.ovh,filerun.ggj.ovh,files.ggj.ovh,forum.ggj.ovh,gestion.ggj.ovh,git.ggj.ovh,gitea.ggj.ovh,gitlab.ggj.ovh,goaccess.ggj.ovh,grav.ggj.ovh,graylog.ggj.ovh,info.ggj.ovh,kibana.ggj.ovh,lcr.ggj.ovh,liens.ggj.ovh,links.ggj.ovh,log.ggj.ovh,manage.ggj.ovh,matomo.ggj.ovh,metrics.ggj.ovh,monitor.ggj.ovh,nextcloud.ggj.ovh,pfa.ggj.ovh,piwik.ggj.ovh,plumxml.ggj.ovh,pma.ggj.ovh,roundcube.ggj.ovh,shaarli.ggj.ovh,snippet.ggj.ovh,social.ggj.ovh,st.ggj.ovh,statistiques.ggj.ovh,stats.ggj.ovh,test.ggj.ovh,tiddly.ggj.ovh,tuleap.ggj.ovh,vma.ggj.ovh,vmail.ggj.ovh,webmail.ggj.ovh,wiki.ggj.ovh,www.ggj.ovh,zabbix.ggj.ovh"
domains_otokoz_ovh="otokoz.ovh,ada.otokoz.ovh,admin.otokoz.ovh,blog.otokoz.ovh,carte.otokoz.ovh,chat.otokoz.ovh,chiffres.otokoz.ovh,cloud.otokoz.ovh,cms.otokoz.ovh,code.otokoz.ovh,compta.otokoz.ovh,datadoghq.otokoz.ovh,democratie.otokoz.ovh,dev.otokoz.ovh,diaspora.otokoz.ovh,discord.otokoz.ovh,discourse.otokoz.ovh,down.otokoz.ovh,elastik.otokoz.ovh,fil.otokoz.ovh,filerun.otokoz.ovh,files.otokoz.ovh,forum.otokoz.ovh,gestion.otokoz.ovh,git.otokoz.ovh,gitea.otokoz.ovh,gitlab.otokoz.ovh,goaccess.otokoz.ovh,grav.otokoz.ovh,graylog.otokoz.ovh,info.otokoz.ovh,kibana.otokoz.ovh,lcr.otokoz.ovh,liens.otokoz.ovh,links.otokoz.ovh,log.otokoz.ovh,manage.otokoz.ovh,matomo.otokoz.ovh,metrics.otokoz.ovh,monitor.otokoz.ovh,nextcloud.otokoz.ovh,pfa.otokoz.ovh,piwik.otokoz.ovh,plumxml.otokoz.ovh,pma.otokoz.ovh,roundcube.otokoz.ovh,shaarli.otokoz.ovh,snippet.otokoz.ovh,social.otokoz.ovh,st.otokoz.ovh,statistiques.otokoz.ovh,stats.otokoz.ovh,test.otokoz.ovh,tiddly.otokoz.ovh,tuleap.otokoz.ovh,vma.otokoz.ovh,vmail.otokoz.ovh,webmail.otokoz.ovh,wiki.otokoz.ovh,www.otokoz.ovh,zabbix.otokoz.ovh"
domains_coworking-lannion_org="cloud.coworking-lannion.org"
#domains="coworking-lannion.org"
domains="17112018.fr 17112018.ovh ambau.ovh ggj.fr ggj.ovh otokoz.ovh";
CLEAN
Clean one domain
domain="17112018.fr"
files="/etc/letsencrypt/archive/${domain} /etc/letsencrypt/csr/${domain} /etc/letsencrypt/keys/${domain} /etc/letsencrypt/live/${domain} /etc/letsencrypt/renewal/${domain}.conf"
for file in $files; do [ -e "$file" ] && rm -fR "$file"; done
Clean all domains
files="/etc/letsencrypt/archive/* /etc/letsencrypt/csr/* /etc/letsencrypt/keys/* /etc/letsencrypt/live/* /etc/letsencrypt/renewal/*"
for file in $files; do [ -e "$file" ] && rm -fR "$file"; done
CERBOT
stop services OpenVZ
ctids_web="$(vzlist -h "*-php*" -Ho ctid|xargs) $(vzlist -h "*-apache*" -Ho ctid|xargs)"
netstat -lnt | grep ':80'
# stop port 80
service haproxy stop
vz-launch -y 'systemctl stop apache2.service' $ctids_web
#vz-launch -y 'duniter stop' 189
netstat -lnt | grep ':80'
create new domain
for domain in ${domains}; do
subdomains=domains_${domain//./_}
echo "-- ${domain} --"
certbot certonly --standalone -d ${!subdomains}
cat /etc/letsencrypt/live/${domain}/fullchain.pem /etc/letsencrypt/live/${domain}/privkey.pem > /etc/server/ssl/private/letsencrypt-${domain}.pem
done
renew domains
file="/etc/server/ssl/private/letsencrypt.pem.lst"
domains=$(ls /etc/letsencrypt/renewal/|sed 's|\.conf||g')
certbot renew
[ -f "$file" ] && mv "$file" "$file.keep$(date +%s)"
for domain in ${domains}; do
file_pem="/etc/server/ssl/private/letsencrypt-${domain}.pem"
cat /etc/letsencrypt/live/${domain}/fullchain.pem /etc/letsencrypt/live/${domain}/privkey.pem > "$file_pem"
echo "${file_pem}" >> "${file}"
done
start services
# start port 80
vz-launch -y 'systemctl start apache2.service' $ctids_web
#vz-launch -y 'duniter webstart --webmhost 10.0.0.189' 189
service haproxy start
netstat -lnt | grep ':80'
phpquery
print information about installed PHP versions, SAPIs & modules
phpquery -v $version -s $sapi_name -M # general form, options detailed below
-V # list all available versions of the php client
-v $version -S # list all available SAPIs for a version
-v $version -s $sapi -M # list all available modules for a version/SAPI
-v $version -s $sapi -m $module # return information about module $module
tests the availability of the module
phpquery -q -v $version -s $sapi -m $module && echo YES
PROCESS
!Important: launch the command with a trailing & (or suspend it with ctrl+z) so it becomes a job that can be managed with the commands below
ps # Get a listing of processes running on the system
jobs # Display a list of current children jobs running in the background
fg ID # Move a background child process into the foreground
bg ID # Resume a stopped job in the background
ctrl+z # Pause the current foreground process & move it into the background
kill %ID # End the background process identified by its job id
nohup $CMD & # Launch command detached from the terminal (immune to hangup)
wait # Suspend script execution until all child processes running in the background have terminated
wait $PID # Suspend script execution until the process with id PID has terminated
cat >pipe # read from stdin
ctrl+d # send EOF to pipe
JOBS
# Display status of jobs
jobs
-l # list full information about each job, including PIDs
-n # lists only processes that have changed status since the last notification
-p # lists PIDs
-r # restrict output to running jobs
-s # restrict output to stopped jobs
job identifier
%N # Job number [N]
%S # Invocation (command-line) of job begins with string S
%?S # Invocation (command-line) of job contains within it string S
%% # "current" job (last job stopped in foreground or started in background)
%+ # "current" job (last job stopped in foreground or started in background)
%- # Last job
$! # Last background process
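A short sketch of these identifiers in practice:
sleep 100 & # becomes job %1
sleep 200 & # becomes job %2 (also %% / %+, the "current" job)
jobs -l # list both jobs with their PIDs
kill %1 # terminate job 1
fg %2 # bring job 2 to the foreground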
DISOWN
# remove jobs from the current shell's job table & job control
disown
-a # remove all jobs
-h # do not remove, but mark the job so SIGHUP is not sent to it when the shell exits
-r # remove only running jobs
KILL
Note that if the process you are trying to stop by PID is in your shell's job table, it may remain visible there, but terminated, until the process is fg'd again.
kill -TSTP $PID # SIGTSTP, 'polite' stop
kill -STOP $PID # SIGSTOP, 'hard' stop
kill -CONT $PID # SIGCONT, resume execution of the process
pkill -f PATTERN # send SIGTERM to processes whose full command line matches PATTERN
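For example, pausing and resuming a background process:
sleep 300 & # start a background job
kill -STOP $! # freeze it (shows state T in ps)
kill -CONT $! # resume it
kill -TERM $! # terminate it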
SID
# Run a program in a new session
setsid $cmd
-w $cmd # Run a program in a new session & wait until the process has finished
PROCESS LIST
pgrep
# return the list of PIDs whose command name matches $PATTERN
pgrep $PATTERN
example
pgrep -lf $PATTERN # list matching processes with PID and full command line
/proc/PID
cat /proc/$PID/stat | awk '{print $5}' # return the PGRP (field 5 of /proc/$PID/stat); much faster than ps (roughly 10x), see man proc
ps/pidof
ps -l $PID # long format
ps -f --pid $PID # full format
ps --ppid $PID # return children processes of parent process $PID
ps auT # return info on processes associated with this terminal
ps -C $CMD # return process started by command $CMD
ps -g 15603 -o uid,pid,ppid,pgid,egid,rgid,pgrp,tty,time,cmd # return the processes of the selected process group (PGRP/PGID 15603)
ps -o pid --no-headers --ppid PID # return children PIDs of parent PID
pidof $CMD # return list of pid started with command
EXAMPLES
jobs playing
timeout 20s sleep 200 &
jobs -l
disown -h %1
exit
lock process
FILEPID=/var/run/${0##*/}.pid
FILELOCK=/var/lock/${0##*/}.lock
PID=$$
(
trap "rm -f $FILEPID; exit" INT TERM EXIT
flock -xw 5 9 && echo "IN $PID" || (echo "///// ERROR \\\\\\" && exit 1)
echo $PID >$FILEPID
sleep 3
echo OUT $PID
) 9>$FILELOCK
listen stdin pipe
tail -f $fifo| while read line; do $cmd; done
tail -f /tmp/pipe | while read line; do ssh root@10.0.0.201 "echo $line >> /tmp/toto"; done
read test < <(echo hello world)
# from pipe
read a <$fifo
while read line; do $cmd; done < <(pipe)
while read line; do $cmd; done < <(tail -f pipe)
nc server
nc -l 12345 | nc www.google.com 80
Port 12345 receives the request. This starts an nc server on port 12345 and all connections get redirected to google.com:80. If a web browser makes a request to nc, the request is sent to google but the response is not sent back to the browser, because pipes are unidirectional. This can be worked around with a named pipe to redirect the input and output.
mkfifo $fifo
nc -l 12345 0<$fifo | nc www.google.com 80 1>$fifo
PRIORITY
0 emerg Emergency: system is unusable # A "panic" condition - notify all tech staff on call? (Earthquake? Tornado?) - affects multiple apps/servers/sites.
1 alert Alert: action must be taken immediately # Should be corrected immediately - notify staff who can fix the problem - example is loss of backup ISP connection.
2 crit Critical: critical conditions # Should be corrected immediately, but indicates failure in a primary system - fix CRITICAL problems before ALERT - example is loss of primary ISP connection.
3 err Error: error conditions # Non-urgent failures - these should be relayed to developers or admins; each item must be resolved within a given time.
4 warning Warning: warning conditions # Warning messages - not an error, but indication that an error will occur if action is not taken, e.g. file system 85% full - each item must be resolved within a given time.
5 notice Notice: normal but significant condition # Events that are unusual but not error conditions - might be summarized in an email to developers or admins to spot potential problems - no immediate action required.
6 info Informational: informational messages # Normal operational messages - may be harvested for reporting, measuring throughput, etc. - no action required.
7 debug Debug: debug-level messages # Info useful to developers for debugging the app, not useful during operations.
FACILITY
0 kern kernel messages
1 user user-level messages
2 mail mail system
3 daemon system daemons
4 auth security/authorization messages
5 syslog messages generated internally by syslogd
6 lpr line printer subsystem
7 news network news subsystem
8 uucp UUCP subsystem
9 cron clock daemon
10 authpriv security/authorization messages
11 FTP daemon
12 NTP subsystem
13 log audit
14 log alert
15 clock daemon
16 local0 local use 0
17 local1 local use 1
18 local2 local use 2
19 local3 local use 3
20 local4 local use 4
21 local5 local use 5
22 local6 local use 6
23 local7 local use 7
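These facility/priority pairs are what selectors such as local7.err match on; logger can generate test messages (message text and tag are arbitrary examples):
logger -p local7.err 'balancer test message' # send an err-level message on facility local7
logger -t mytag -p mail.info 'queue flushed' # tag the message with 'mytag'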
BALANCER
# balancer
#$template balancer,"%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag% %msg%\n"
#$template balancer,"%$DAY%-%$MONTH%-%$YEAR% %HOSTNAME% %syslogtag% %msg%\n"
#$template BALANCER,"%TIMESTAMP:::date-rfc3339% %HOSTNAME% %syslogtag% %msg%\n"
#$template _balancer,"%timegenerated% %HOSTNAME% %syslogtag% %msg%\n"
$template _balancer,"%timegenerated:::date-mysql% %timereported:::date-subseconds% %HOSTNAME% %syslogtag% %msg%\n"
# local7._balancer /var/log/balancer/_balancerd.log;_balancer
#local7._worker /var/log/balancer/_worker.log;_balancer
#if $msg startswith 'Mar' and $syslogpriority-text == 'err' then /var/log/_balancer/_balancerd.error;_balancer
if $msg contains '_balancerd' and $syslogpriority-text == 'err' then /var/log/_balancer/_balancerd.error
#:msg, regex, "_balancerd" /var/log/_balancer/_balancerd.error
#:rawmsg, regex, "bal" /var/log/_balancer/_balancerd.error
#:rawmsg, regex, "^Mar" /var/log/_balancer/_balancerd.error
#:msg, contains, "_balancerd" /var/log/_balancer/_balancerd.error
#:syslogpriority, isequal, "err" \
#/var/log/_balancer/_balancerd.error;_balancer
BALANCER
#local7.debug /var/log/_balancer.debug;balancer
$template localhost,"%timegenerated% %HOSTNAME% %syslogtag% %programname%%msg%\n" # default
$template balancer,"%timegenerated:::date-mysql% %timereported:::date-subseconds% %msg%\n"
template(name="TEST" type="string" string="%timegenerated% -HOSTNAME=%HOSTNAME% -syslogtag=%syslogtag% -programname=%programname% -syslogfacility=%syslogfacility% -syslogfacility-text=%syslogfacility-text% -syslogseverity=%syslogseverity% -syslogseverity-text=%syslogseverity-text% -syslogpriority=%syslogpriority% -syslogpriority-text=%syslogpriority-text% -inputname=%inputname% -app-name=%app-name% -procid=%procid% -msgid=%msgid% %msg%")
BIND
#:msg, regex, "balancerd" /var/log/balancer/balancerd.err
#& stop
#:rawmsg, regex, "bal" /var/log/_balancer/_balancerd.err; template & stop
#:msg, contains, "_balancerd" /var/log/_balancer/_balancerd.err
#:syslogpriority, isequal, "err" \
#/var/log/_balancer/_balancerd.err;_balancer
############################## TEMPLATE
#
# local7.debug /var/log/_balancer.debug;balancer
# $template balancer,"%timegenerated:::date-mysql% %timereported:::date-subseconds% %HOSTNAME% %syslogtag% %programname% %msg%\n"
################ MESSAGE
#
# datetime
# template(name="TIME" type="string" string="%timegenerated:::date-mysql%.%timereported:::date-subseconds% %programname%%msg%\n")
#
# default
# template(name="DEFAULT" type="string" string="%timegenerated% %HOSTNAME% %syslogtag% %programname%%msg%\n")
#
# all properties
# template(name="ALL" type="string" string="%timegenerated% -HOSTNAME=%HOSTNAME% -syslogtag=%syslogtag% -programname=%programname% -syslogfacility=%syslogfacility% -syslogfacility-text=%syslogfacility-text% -syslogseverity=%syslogseverity% -syslogseverity-text=%syslogseverity-text% -syslogpriority=%syslogpriority% -syslogpriority-text=%syslogpriority-text% -inputname=%inputname% -app-name=%app-name% -procid=%procid% -msgid=%msgid% -fromhost=%fromhost% -fromhost-ip=%fromhost-ip% %msg%\n")
$template APACHE,"%msg%\n"
############### FILE
#
# file template to agregate virtualhost logs with unique & global containers view
# $template DYNFILE,"/var/lib/vz/log/apache2/%programname%.%syslogseverity-text%"
# file template to separate virtualhost logs by containers Ip provenance
# $template DYNFILE,"/var/lib/vz/log/%fromhost-ip%/apache2/%programname%.%syslogseverity-text%"
# WARNING: if you change the path, you must also adjust the fail2ban & logrotate config files
$template DYNFILE,"S_HOSTING_PATH_LOG/apache2/%programname%.%syslogseverity-text%"
############################## BIND
#
# examples:
# :msg, contains, "localhost" /var/log/apache2/localhost.log; localhost & stop
# :msg, regex, "balancerd" /var/log/balancer/balancerd.err
# :rawmsg, regex, "bal" /var/log/_balancer/_balancerd.err
# :msg, contains, "_balancerd" /var/log/_balancer/_balancerd.err
# :syslogpriority, isequal, "err" \
# & stop
# if $msg == '' and $syslogpriority-text == 'info' then /var/log/apache2/localhost.log; localhost
# if $programname == 'apache2' and $syslogpriority-text == 'info' then /var/log/apache2/localhost.log; balancerd
:syslogtag, contains, "apache" -?DYNFILE; APACHE & stop
# WARNING: if you change the path, you must also adjust the fail2ban & logrotate config files
:syslogtag, contains, "apache" -S_HOSTING_PATH_LOG/apache2/others.log; APACHE & stop
lets you install, configure, refresh and remove snaps. Snaps are packages that work across many different Linux distributions, enabling secure delivery and operation of the latest apps and utilities
list
list installed packages with their current revision
snap list
--all # list installed packages with all revisions
info
shows detailed information about snaps
snap info <snap>
--color=[auto|never|always] # Use a little bit of color to highlight some things. (default: auto)
--unicode=[auto|never|always] # Use a little bit of Unicode to improve legibility. (default: auto)
--abs-time # Display absolute times (in RFC 3339 format). Otherwise, display relative times up to 60 days, then YYYY-MM-DD
--verbose # Include more details on the snap (expanded notes, base, etc.)
find
search for packages by name
snap find <snap>
remove
remove package with all revisions
sudo snap remove <snap>
--revision $REV # remove only the given revision of the package
purge disabled
snap list --all | grep disabled$ | awk '{ print $1" "$3 }' | xargs -l bash -c 'sudo snap remove $0 --revision $1'
CLIENT
endline options
\g # send the statement; formatted table output (default, same as terminating with ;)
\G # send the statement; display results vertically (one line per column value)
command options
# the MariaDB command-line tool
mysql [options] db_name
# output
-B, --batch # Print results using tab as the column separator, with each row on a new line
-E, --vertical # Print query output rows vertically (one line per column value)
-H, --html # Produce HTML output
-L, --skip-line-numbers # Do not write line numbers for errors
-N, --skip-column-names # Do not write column names in results
-r, --raw # Write column values without escape conversion (raw output, mainly useful with --batch/--silent)
-s, --silent # Silent mode. Produce less output
-t, --table # Display output in table format
-v, --verbose # Verbose mode
-X, --xml # Produce XML output
--show-warnings # Cause warnings to be shown after each statement if there are any
--line-numbers # Write line numbers for errors
-# ., --debug[=debug_options] # Write a debugging log. A typical debug_options string is 'd:t:o,file_name'. The default is 'd:t:o,/tmp/mysql.trace'
--binary-mode # By default, ASCII '\0' is disallowed and '\r\n' is translated to '\n'
# connection
-D ., --database=db_name # The database to use
-h ., --host=host_name # Connect to the MariaDB server on the given host
-P ., --port=port_num # The TCP/IP port number to use for the connection, or 0 for the default
-u ., --user=user_name # The MariaDB user name to use when connecting to the server
# used
-c, --comments # Whether to preserve comments in statements sent to the server
-C, --compress # Compress all information sent between the client and the server if both support compression
-e ., --execute=statement # execute statement
-f, --force # Continue even if an SQL error occurs
-n, --unbuffered # Flush the buffer after each query
-w, --wait # If the connection cannot be established, wait and retry instead of aborting
-V, --version # Display version information and exit
--delimiter=str # Set the statement delimiter. The default is the semicolon character (“;”)
--tee=file_name # Append a copy of output to the given file
--max-allowed-packet=num # Set the maximum packet length to send to or receive from the server. (Default value is 16MB, largest 1GB.)
# others
--abort-source-on-error # Abort 'source filename' operations in case of errors
--auto-rehash # Enable automatic rehashing
--auto-vertical-output # Automatically switch to vertical output mode if the result is wider than the terminal width
--character-sets-dir=path # The directory where character sets are installed
--column-names # Write column names in results
--column-type-info, -m # Display result set metadata
--connect-timeout=seconds # Set the number of seconds before connection timeout
--debug-check # Print some debugging information when the program exits
--debug-info, -T # Prints debugging information and memory and CPU usage statistics when the program exits
--default-auth=name # Default authentication client-side plugin to use
--default-character-set=charset_name # Use charset_name as the default character set for the client and connection
--defaults-extra-file=filename # Set filename as the file to read default options from after the global defaults files has been read
--defaults-file=filename # Set filename as the file to read default options from, override global defaults files
--defaults-group-suffix=suffix # In addition to the groups named on the command line, read groups that have the given suffix
--disable-named-commands # Disable named commands
-i, --ignore-spaces # Ignore spaces after function names
--init-command=str # SQL Command to execute when connecting to the MariaDB server
--local-infile[={0|1}] # Enable or disable LOCAL capability for LOAD DATA INFILE
--max-join-size=num # Set the automatic limit for rows in a join when using --safe-updates. (Default value is 1,000,000.)
-G, --named-commands # Enable named mysql commands. Long-format commands are allowed, not just short-format commands
--net-buffer-length=size # Set the buffer size for TCP/IP and socket communication. (Default value is 16KB.)
-A, --no-auto-rehash # This has the same effect as --skip-auto-rehash
-b, --no-beep # Do not beep when errors occur
--no-defaults # Do not read default options from any option file
-o, --one-database # Ignore statements except those that occur while the default database is the one named on the command line
--pager[=command] # Use the given command for paging query output
--password[=password], -p[password] # The password to use when connecting to the server
-W, --pipe # On Windows, connect to the server via a named pipe
--plugin-dir=dir_name # Directory for client-side plugins #
--print-defaults # Print the program argument list and exit
--progress-reports # Get progress reports for long running commands (such as ALTER TABLE)
--prompt=format_str # Set the prompt to the specified format
--protocol={TCP|SOCKET|PIPE|MEMORY} # The connection protocol to use for connecting to the server
--quick, -q # Do not cache each query result, print each row as it is received
--reconnect # If the connection to the server is lost, automatically try to reconnect
-U, --safe-updates, --i-am-a-dummy # Allow only those UPDATE and DELETE statements that specify which rows to modify by using key values
--secure-auth # Do not send passwords to the server in old (pre-4.1.1) format
--select-limit=limit # Set automatic limit for SELECT when using --safe-updates. (Default value is 1,000.)
--server-arg=name # Send name as a parameter to the embedded server
--sigint-ignore # Ignore SIGINT signals
--skip-auto-rehash # Disable automatic rehashing
--socket=path, -S path # For connections to localhost, the Unix socket file to use
# ssl connection
--ssl # Enable SSL for connection (automatically enabled with other flags)
--ssl-ca=name # CA file in PEM format
--ssl-capath=name # CA directory
--ssl-cert=name # X509 cert in PEM format
--ssl-cipher=name # SSL cipher to use
--ssl-key=name # X509 key in PEM format
--ssl-crl=name # Certificate revocation list
--ssl-crlpath=name # Certificate revocation list path
--ssl-verify-server-cert # Verify server's "Common Name" in its cert against hostname used when connecting
MySQL Commands
mysql sends each SQL statement that you issue to the server to be executed. There is also a set of commands that mysql itself interprets. For a list of these commands, type help or \h at the mysql> prompt:
List of all MySQL commands:
Note that all text commands must be first on line and end with ';'
? (\?) Synonym for `help'
clear (\c) Clear command
connect (\r) Reconnect to the server. Optional arguments are db and host
delimiter (\d) Set statement delimiter
edit (\e) Edit command with $EDITOR
ego (\G) Send command to mysql server, display result vertically
exit (\q) Exit mysql. Same as quit
go (\g) Send command to mysql server
help (\h) Display this help
nopager (\n) Disable pager, print to stdout
notee (\t) Don't write into outfile
pager (\P) Set PAGER [to_pager]. Print the query results via PAGER
print (\p) Print current command
prompt (\R) Change your mysql prompt
quit (\q) Quit mysql
rehash (\#) Rebuild completion hash
source (\.) Execute an SQL script file. Takes a file name as an argument
status (\s) Get status information from the server
system (\!) Execute a system shell command
tee (\T) Set outfile [to_outfile]. Append everything into given outfile
use (\u) Use another database. Takes database name as argument
charset (\C) Switch to another charset. Might be needed for processing binlog with multi-byte charsets
warnings (\W) Show warnings after every statement
nowarning (\w) Don't show warnings after every statement
For server side help, type 'help contents'
option examples
# remove title and format
mysql -uroot -p$pwd -sBe "$cmd"
# upgrade databases
mysql_upgrade -uroot -p$pwd
SQL
create database
CREATE DATABASE $db_name CHARACTER SET = 'utf8' COLLATE = 'utf8_unicode_ci';
# show tables by engine
SELECT ENGINE, COUNT(*) AS count FROM INFORMATION_SCHEMA.TABLES GROUP BY ENGINE;
# show log variables
SHOW VARIABLES LIKE '%log%';
collation
https://mariadb.com/kb/en/supported-character-sets-and-collations/
https://stackoverflow.com/questions/766809/whats-the-difference-between-utf8-general-ci-and-utf8-unicode-ci
Change the collation for database $db to 'utf8mb4_general_ci'; utf8mb4_unicode_ci is more accurate but slower
ALTER DATABASE $db COLLATE = 'utf8mb4_general_ci';
privileges
SELECT user,host,password FROM mysql.user WHERE Host <> 'localhost'; # show users
SHOW GRANTS; # show grants
SHOW PRIVILEGES\G; # show privileges
GRANT ALL PRIVILEGES ON *.* TO $user@'$host' IDENTIFIED BY '$pwd' WITH GRANT OPTION; # gives all privileges with grant option to user
GRANT USAGE ON *.* TO `$user`@`%` IDENTIFIED BY '$pwd';
GRANT SELECT, INSERT, UPDATE, DELETE ON `db_name`.* TO `$user`@`%`;
GRANT PROXY ON ''@'%' TO 'root'@'$host' with GRANT OPTION; # gives proxy privileges with grant option to user
REVOKE ALL PRIVILEGES, GRANT OPTION FROM $user; # revoke all privileges for user
DROP USER IF EXISTS $user1, $user2; # drop users
DUMP
dump all databases in separates files
[ -z "${pwd}" ] && echo -n 'pwd: ' && read pwd
path="/var/share/mariadb/default/dump-$(grep $HOSTNAME /etc/hosts|cut -d' ' -f1)-$(date +%s)"
! [ -d "${path}" ] && mkdir -p "${path}"
for db in $(mysql -uroot -p$pwd -Bse "SHOW SCHEMAS"); do
[[ $db =~ _schema$ ]] && opt='--skip-lock-tables' || opt=
echo "$db"; mysqldump -uroot -p$pwd $db --no-data $opt > "${path}/${db}.sql"
mysqldump -uroot -p${pwd} ${db} --no-create-info ${opt} | gzip -c > "${path}/${db}-data.sql.gz"
done
echo -e "\e[0;33m$path\e[0;0m"
ls -al "${path}"
data & structure
mysqldump -uroot -p$pwd $db_name --no-data > ${file}.sql # dump only structure
mysqldump -uroot -p$pwd $db_name --no-create-info --ignore-table='$tables' > ${file}-data.sql # dump only data
mysqldump -u$user -p$pwd $db_name --dump-slave --no-data > "${path2}/${file}-struct.sql" # dump only structure for slave in replication
mysqldump -u$user -p$pwd $db_name --dump-slave --no-create-info | gzip -c > "${path2}/${file}.sql.gz" # dump only data for slave replication
SSL
https://www.cyberciti.biz/faq/how-to-setup-mariadb-ssl-and-secure-connections-from-clients/
IPTABLES
iptables -nvL --line-number # show rules with line number
iptables -nvL -t nat --line-number # show rules with line number for NAT table
iptables -nvL -t mangle --line-number # show rules with line number for MANGLE table
iptables -S # show source command of rules
chain
iptables -L $CHAIN -v -n --line-numbers # list rules for chain $CHAIN
iptables -D $CHAIN $LINE_NUMBER # delete rule $LINE_NUMBER for chain $CHAIN
iptables -t $TABLE -D $CHAIN $LINE_NUMBER # delete rule $LINE_NUMBER for chain $CHAIN and specific table
FAIL2BAN
fail2ban client
fail2ban-client status # get global status
fail2ban-client status $jail # get status for jail $jail
fail2ban-client get loglevel # get log level
fail2ban-client set loglevel $level # set the log level
fail2ban-client get $jail ignoreip # get ignore ip for $jail
fail2ban-client set $jail addignoreip IP # add ignore ip IP for $jail
https://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certificate-authority/
https://jamielinux.com/docs/openssl-certificate-authority/create-the-root-pair.html
https://linux.die.net/man/1/req
COMMON
configuration
To generate a certificate for several domains, create a specific configuration file $file_conf with:
uncomment : req_extensions = v3_req (in the [ req ] section)
add in [ v3_req ] section : subjectAltName=DNS:smtp.${domain},DNS:mail.${domain},DNS:imap.${domain}
files
# for one domain and few subdomain (dovecot)
file_conf=${path_ssl}/openssl-multi-${domain}-mail.cnf
file_key=${path_ssl}/private/mail.${domain}.key
file_csr=${path_ssl}/private/mail.${domain}.csr
file_crt=${path_ssl}/certs/mail.${domain}.crt
file_pem=${path_ssl}/private/mail.${domain}.pem
# for fews domains & subdomains (postfix)
file_conf=${path_ssl}/openssl-extend-${domain}-mail.cnf
file_key=${path_ssl}/private/mail.${domain}-extend.key
file_csr=${path_ssl}/private/mail.${domain}-extend.csr
file_crt=${path_ssl}/certs/mail.${domain}-extend.crt
file_pem=${path_ssl}/private/mail.${domain}-extend.pem
ROOT AUTHORITY + CHILD CERTIFICATES
ROOT AUTHORITY
configure
for authority certificate
file_ca_key=${path_ssl}/private/rootCA-${domain}.key
file_ca_pem=${path_ssl}/certs/rootCA-${domain}.pem
create
Create the Root Key - for CN use the correct FQDN !! ex: mail.ambau.ovh & sign it:
openssl genrsa -out $file_ca_key 4096 # without password
openssl genrsa -des3 -out $file_ca_key 4096 # with password
# Self-sign the certificate
openssl req -x509 -new -nodes -key $file_ca_key -sha256 -days 3650 -out $file_ca_pem
CHILD - once per device
configure
data
domain=ambau.ovh
path_ssl=/var/share/mail/default/ssl
# for fews domains & subdomains (postfix)
file_conf=${path_ssl}/openssl-extend-${domain}-mail.cnf
file_key=${path_ssl}/private/mail.${domain}-extend.key
file_csr=${path_ssl}/private/mail.${domain}-extend.csr
file_crt=${path_ssl}/certs/mail.${domain}-extend.crt
file_pem=${path_ssl}/private/mail.${domain}-extend.pem
configuration
to generate a certificate for several domains, create a specific conf file $file_conf with:
- in the '[ req ]' section, uncomment: req_extensions = v3_req
- in the '[ v3_req ]' section, add: subjectAltName=DNS:smtp.${domain},DNS:mail.${domain},DNS:imap.${domain}
create
# Create the key
openssl genrsa -out $file_key 2048
# Create the Certificate Signing Request CSR - for CN use the correct FQDN !! ex: mail.ambau.ovh
openssl req -new -key $file_key -out $file_csr -config $file_conf
# verify configuration of CSR
openssl req -text -noout -in $file_csr
# Self-sign the certificate the CSR
openssl x509 -req -days 1460 -sha256 -in $file_csr -CA $file_ca_pem -CAkey $file_ca_key -CAcreateserial -out $file_crt -extensions v3_req -extfile $file_conf
# Create pem file
cat $file_crt $file_key > $file_pem
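Optional sanity checks after signing, reusing the variables above (a sketch, not required by the procedure):
openssl verify -CAfile $file_ca_pem $file_crt # check the certificate against the root CA
openssl x509 -text -noout -in $file_crt | grep -A1 'Subject Alternative Name' # confirm the SAN entries
openssl x509 -noout -enddate -in $file_crt # show the expiration date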
rights
chmod 700 ${path_ssl}/private # directory needs the execute bit to be traversable
chmod 644 -R ${path_ssl}/certs
find ${path_ssl}/private -type f -exec chmod 0400 {} \;
find ${path_ssl}/certs -type f -exec chmod 0444 {} \;
SIMPLE certificate
# create certificate & keyfile, 1095 days
openssl req -x509 -newkey rsa:2048 -keyout mydomain.key -out mydomain.crt -days 1095
# create certificate & keyfile for postfix, 3650 days
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout /etc/ssl/private/mydomain.key -out /etc/ssl/certs/mydomain.pem
TEST SSL
starttls
telnet ${domain} 25
telnet ${domain} 587
openssl s_client -starttls smtp -connect ${domain}:25
openssl s_client -starttls smtp -connect ${domain}:587
openssl s_client -starttls imap -connect ${domain}:143
openssl s_client -starttls pop3 -connect ${domain}:110
openssl s_client -tls1_2 -servername host -connect 203.0.113.15:443
ssl
openssl s_client -connect ${domain}:465
openssl s_client -connect ${domain}:993 -showcerts # imap 993
show expiration date for certificate
domain="ambau.ovh"
# for mail connection
echo | openssl s_client -connect mx.${domain}:25 -starttls smtp | openssl x509 -noout -dates
# for ftp connection
echo | openssl s_client -connect ftp.${domain}:21 -starttls ftp | openssl x509 -noout -dates
client connection to imaps
openssl s_client -connect mx.${domain}:993
a logout # once connected on port 993, close the IMAP session
quit
ENCODE FILE
encode & compress file
tar -czf - $FILE | openssl enc -e -aes256 -out $FILE.tar.gz
openssl enc -d -aes256 -in $FILE.tar.gz | tar xz -C $PATH
encode file
openssl enc -e -aes-256-cbc -in /root/.mariadb -pass pass:3667gaz > /root/.mariadb.enc
openssl enc -in /root/.mariadb.enc -d -aes-256-cbc -pass stdin > /root/.mariadb
TUNNEL
background tunnel
ssh -fNC [USER_HOST]@[IP_HOST] -p [PORT_HOST] -L [PORT_LISTENING]:[HOST_REMOTE_MYSQL]:[PORT_REMOTE_MYSQL]
background tunnel with socket
# create tunnel
ssh -MS $SOCKET -fnNT -L 50000:localhost:3306 $USER@$HOST
# check connection
ssh -S $SOCKET -O check $USER@$HOST
# exit connection
ssh -S $SOCKET -O exit $USER@$HOST
# ctrl_cmd : check forward cancel stop exit
ssh autoclosing after command
ssh -f -o ExitOnForwardFailure=yes -L 3306:localhost:3306 $USER@$HOST sleep 10
mysql -e 'SHOW DATABASES;' -h 127.0.0.1
ssh -> ssh -> ssh
spawn ssh usr1@IP1 ssh usr2@IP2 $CMD # CMD is the command to execute on the second host (IP2)
expect "password"
send "PWD_USER1\n"
expect "password"
send "PWD_USER2\n"
expect eof
exit
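possible usage, assuming the script above is saved as hop.exp (hypothetical file name):
expect hop.exp
# or: chmod +x hop.exp && ./hop.exp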
TUNNEL ControlMaster
ssh master control (not so efficient...)
~/.ssh/config
ControlMaster auto
ControlPath ~/.ssh/control:%h:%p:%r
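a fuller ~/.ssh/config stanza (sketch; ControlPersist keeps the master connection open after the first session exits, which helps with the efficiency issue noted above):
Host *
    ControlMaster auto
    ControlPath ~/.ssh/control:%h:%p:%r
    ControlPersist 10m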
sshfs
user="root"
ip="91.121.112.140"
path_base="/mnt/sshfs"
path_remote="vz/share"
path_local=$path_base/$path_remote
[ ! -d "$path_local" ] && sudo mkdir -p "$path_local"
sudo chown 1000:1000 -R "$path_local"
sshfs -o reconnect -o big_writes ${user}@${ip}:/"$path_remote" "$path_local"
ll "$path_local"
umount "$path_local" && sudo rm -fR "${path_base}/${pathremote%%/*}"
examples
# rspamd
ssh -fN root@91.121.112.140 -L 8080:10.0.0.180:11334
TUNNELING mysql over ssh & keep the tunnel alive without a command
ssh -fNC [user_host]@[ip_host] -p [port_host] -L [port_listening]:[host_remote_mysql]:[port_remote_mysql]
# connect mysql
mysql -h 127.0.0.1 -u [USER_REMOTE_MYSQL] -p'[PASSWORD_REMOTE_MYSQL]' -P [PORT_LISTENING]
# look ssh connections
lsof -i -n | grep ssh
# kill ssh process
kill [PID_PROCESS]
kill $(lsof -i -n | grep ssh | grep LISTEN | awk '{print $2}' | sort -u)
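alternative (sketch, assuming the tunnels were started with "ssh -fNC" as above): kill them by matching the command line
pkill -f 'ssh -fNC'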
examples
ssh -fNC root@91.121.112.140 -L 3306:10.0.0.120:3306
mysql -h 127.0.0.1 -u roothost -p'[$mysql_pwd]' -P 3306
ssh -fNC root@91.121.112.140 -p 20120 -L 3333:localhost:3306
mysql -h 127.0.0.1 -u root -p'[$mysql_pwd]' -P 3333
TUNNELING mysql over ssh with a mysql client command
ssh -f [user_host]@[ip_host] -p [port_host] -L [port_listening]:[host_remote_mysql]:[port_remote_mysql] sleep 5; \
mysql -h 127.0.0.1 -u [user_remote_mysql] -p'[password_remote_mysql]' -P [port_listening]
UTILITIES
# list ssh connections
netstat -n --protocol inet | grep ':22'
# launch remote command with local file
ssh $USER@$HOST $COMMAND < $FILE
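variant (sketch; local_script.sh is a hypothetical file name): run a local script on the remote host by feeding it to a remote shell
ssh $USER@$HOST 'bash -s' < local_script.sh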
GREP
grep, egrep, fgrep - print lines that match patterns
grep [OPTION...] PATTERNS [FILE...]
grep [OPTION...] -e PATTERNS ... [FILE...]
grep [OPTION...] -f PATTERN_FILE ... [FILE...]
# used
-i, --ignore-case # Ignore case distinctions, so that characters that differ only in case match each other.
-v, --invert-match # Invert the sense of matching, to select non-matching lines.
-e PATTERNS, --regexp=PATTERNS # can be used multiple times. This option can be used to protect a pattern beginning with “-”.
-f FILE, --file=FILE # Obtain patterns from FILE, one per line, it can be used multiple times.
-E, --extended-regexp # Interpret PATTERNS as extended regular expressions
-x, --line-regexp # Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $.
# General Output Control
-c, --count # instead print a count of matching lines for each input file; with -v, --invert-match, count non-matching lines
--color[=WHEN], --colour[=WHEN] # Surround the matched (non-empty) strings, matching lines, ... The colors are defined by the environment variable GREP_COLORS
-L, --files-without-match # instead print the name of each input file from which no output would normally have been printed. The scanning will stop on the first match
-l, --files-with-matches # instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match
-m NUM, --max-count=NUM # Stop reading a file after NUM matching lines.
-o, --only-matching # Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line
-q, --quiet, --silent # Quiet; do not write anything to standard output.
-s, --no-messages # Suppress error messages about nonexistent or unreadable files
# Output Line Prefix Control
-H, --with-filename # Print the file name for each match. This is the default when there is more than one file to search
-h, --no-filename # Suppress the prefixing of file names on output. This is the default when there is only one file (or only standard input) to search.
-n, --line-number # Prefix each line of output with the 1-based line number within its input file.
# Context Line Control
-A NUM, --after-context=NUM # Print NUM lines of trailing context after matching lines.
-B NUM, --before-context=NUM # Print NUM lines of leading context before matching lines.
-C NUM, -NUM, --context=NUM # Print NUM lines of output context.
examples
# return file names matching 2 conditions (AND)
grep -l 'rotate 4' /etc/logrotate.d/* | xargs grep -l 'daily'
grep -l -e 'rotate 4' -e 'daily' /etc/logrotate.d/*   # note: -e ... -e ... is an OR, matches files containing either pattern
grep -l $pattern $files | xargs sed -i "s/$pattern/$replacement/"   # substitute the string in matched files
grep -B 2 $pattern file # print the matching line & the 2 lines before
grep -A 5 $pattern file # print the matching line & the 5 lines after
grep -C 3 $pattern file # print the matching line & the 3 lines around it
grep -m 5 $pattern file # print at most 5 matching lines per file