Task force


- Also called a project team. A task force eases communication and coordination among the specialists involved and forms close cooperative working relationships; because members act on the authority of ability and knowledge rather than of formal position, responsibility for results is clear and the team can act decisively. Once the assigned goal is achieved the organization is dissolved, and a new task force is formed for the next task of adapting to environmental change, so the organization as a whole takes on the character of a dynamic organization that adapts to its environment. A task force is not only an organizational form adaptable to changes in markets, technology, and the like; it also gives members the chance to experience the challenge, responsibility, sense of achievement, and solidarity that come with a new task, and so raises members' job satisfaction.


R&D (Research and development)


Research and development, or simply R&D, refers, according to the OECD, to "creative work undertaken to increase the stock of knowledge, including knowledge of humans, culture and society, and the use of this knowledge to devise new applications."

R&D is oriented toward science or specific technology development, and is at times carried out as a corporate or government activity.

Other posts in the 'TF' category

[Research] django high availability  (0) 2017.06.28
[Research] openstack  (0) 2017.06.27
[Research] Quota/Spike difference  (0) 2017.06.27
[Research] 스파이크( spike)  (0) 2017.06.27
[Research] API (Application Programming Interface)  (0) 2017.06.27

The order in which you build a Django app is a bit ambiguous...


"Do you define the urls patterns first, or create the view first?"


To define a urls pattern you need the view name,

and to write and call a view the urls pattern has to be defined first...


Because the order is ambiguous, a common mistake is an error caused by the view name in the urls pattern not matching the actual view function name,

or a page that won't show because the view function was written but the urls pattern was never defined. A bit of wasted digging, in other words.


So if you're not yet comfortable with Django (like me), it's best to first build a structure that works even as an empty page, confirm that the page opens,

and then go on with the rest of the coding.


1. First, create an app for testing the post-list view

(Personally, I wish manage.py's startapp would also take an index view name as a param and create views.py and urls.py automatically -_-;)

]# manage.py startapp catalog index 

Entering a command like the one above would then behave like this...


]# cd /home/myproject
]# /usr/local/python3.4/bin/python3 manage.py startapp catalog
]# tree /home/myproject
/home/myproject
├── myproject
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── catalog
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
└── manage.py

]# cat <<EOF > catalog/urls.py
from django.conf.urls import url
from .views import index

urlpatterns = [
    url(r'^$', index),
]
EOF

]# cat <<EOF > catalog/views.py
from django.shortcuts import render
from django.http import HttpResponse

def index(request):
    return HttpResponse("Hello world")
EOF
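The wish above could be approximated with a small wrapper around startapp; a sketch (the `startapp_with_view` name, the stub file contents, and the injectable `run` parameter are all mine, not Django features; it assumes `django-admin` is on PATH):

```python
#!/usr/bin/env python3
"""Hypothetical `startapp.py <app> <view>`: run startapp, then seed the
new app's views.py and urls.py with a stub view so the page opens at once."""
import pathlib
import subprocess
import sys


def startapp_with_view(app, view, run=subprocess.check_call):
    # create the normal app skeleton (run is injectable so tests can fake it)
    run(["django-admin", "startapp", app])
    appdir = pathlib.Path(app)
    # seed a minimal "Hello world" view ...
    (appdir / "views.py").write_text(
        "from django.http import HttpResponse\n\n"
        "def {0}(request):\n    return HttpResponse('Hello world')\n".format(view))
    # ... and a urls.py that routes the app root to it
    (appdir / "urls.py").write_text(
        "from django.conf.urls import url\n"
        "from .views import {0}\n\n"
        "urlpatterns = [\n    url(r'^$', {0}),\n]\n".format(view))


if __name__ == "__main__":
    startapp_with_view(*sys.argv[1:3])
```

With this, `python3 startapp.py catalog index` would do in one step what the heredocs above do by hand.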


2. Add catalog to INSTALLED_APPS so the project can recognize the app.

]# vi myproject/settings.py
INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'catalog',
]


3. Include the app's catalog/urls.py in the project's myproject/urls.py

]# vi myproject/urls.py
from django.conf.urls import url, include
from django.contrib import admin

urlpatterns = [
    url(r'^admin/', admin.site.urls),
    url(r'^catalog/', include('catalog.urls')),
]

/usr/local/python3.4/bin/python3 manage.py runserver 0.0.0.0:8080

   

4. Start Django and confirm that the page opens.

http://127.0.0.1:8080/catalog/



5. Let's write the model to show on the list page

]# cat <<EOF > catalog/models.py
from django.db import models

class GuestBook(models.Model):

    auth = models.CharField(max_length=20)
    title = models.CharField(max_length=100)
    content = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.title

    class Meta:
        ordering = ['-id']
EOF

]# /usr/local/python3.4/bin/python3 manage.py makemigrations catalog
]# /usr/local/python3.4/bin/python3 manage.py migrate


6. Let's put some data into the GuestBook model

/usr/local/python3.4/bin/python3 manage.py shell
>>> from catalog.models import GuestBook
>>> for i in range(1,1001):
...     GuestBook.objects.create(title='{} 째글입니다'.format(i), auth='관리자', content='{} 째 글입니다'.format(i))
>>> GuestBook.objects.all()


7. Let's write the view and template that fetch the GuestBook model.

cat <<EOF > catalog/views.py
from django.shortcuts import render
from django.http import HttpResponse
from .models import GuestBook

def index(request):
    guest_book_list = GuestBook.objects.all()
    return render(request, 'catalog/index.html', {'guest_book_list': guest_book_list})
EOF

mkdir -p catalog/templates/catalog
cat <<EOF > catalog/templates/catalog/index.html 
<!DOCTYPE html>
<html lang="en">
<head>
  <title>GuestBook Example</title>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
</head>
<body>

<div class="container">
  <h2>GuestBook</h2>
  <ul class="list-group">
    {% for guest_book in guest_book_list %}
      <li class="list-group-item">{{ guest_book.title }}</li>
    {% endfor %}
  </ul>
</div>

</body>
</html>
EOF



8. As in the screenshot above, every post is printed in one long list, so let's attach paging using django's Paginator.

# catalog/views.py
from django.shortcuts import render
from django.http import HttpResponse
from .models import GuestBook
from django.core.paginator import Paginator

def index(request):
    """
    Read the page variable from the GET query string.
    """
    page = request.GET.get('page', '1')
    if len(page) == 0: page = '1'
    if type(page) != int: page = int(page)

    """
    Get the guestbook queryset.
    """

    guest_book_list = GuestBook.objects.all()

    """
    Page guest_book_list 10 items at a time.
    """
    num = 10
    guest_book_paginator = Paginator(guest_book_list, num)
    guest_book_page = guest_book_paginator.page(page)

    """
    Compute the page group (list) that contains the current page and use it
    as the page navigator: find the multiple of num closest below page to get
    the start page index, then add num to get the end page index.
    """
    for i in reversed(range(1, page + 1)):
        if i % num == 1:
            start_page_index = i
            break
    end_page_index = min(start_page_index + num, max(guest_book_paginator.page_range) + 1)
    page_list = range(start_page_index, end_page_index)

    """
    If you don't like computing the navigator that way, it can also be done
    with a simple formula.
    See: https://jupiny.com/2016/11/22/limit-pagination-page-numbers-range/

    max_index = len(guest_book_paginator.page_range)
    start_index = int((page - 1) / num) * num
    end_index = start_index + num
    if end_index >= max_index:
        end_index = max_index
    page_list = guest_book_paginator.page_range[start_index:end_index]
    """
    return render(request, 'catalog/index.html', {'guest_book_list': guest_book_page, 'page_list': page_list})
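The window computation above can be checked in isolation; a minimal sketch of the same arithmetic (the `page_window` name is mine; pages are 1-based and grouped in windows of `num`):

```python
def page_window(page, num, num_pages):
    """Page numbers of the navigator window containing `page`.

    Windows are 1..num, num+1..2*num, ... clipped to num_pages, matching
    the start/end page index computation in the view above.
    """
    start_page_index = (page - 1) // num * num + 1   # nearest window start at or below page
    end_page_index = min(start_page_index + num, num_pages + 1)  # one past the window's last page
    return list(range(start_page_index, end_page_index))
```

For example, `page_window(15, 10, 100)` gives pages 11..20, and `page_window(3, 10, 7)` gives 1..7, the same results as the loop-based version in the view.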


# catalog/templates/catalog/index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <title>GuestBook Example</title>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
  <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
</head>
<body>

<div class="container">
  <h2>GuestBook</h2>
  <ul class="list-group">
    {% for guest_book in guest_book_list %}
      <li class="list-group-item">{{ guest_book.title }}</li>
    {% endfor %}
  </ul>
  <ul class="pagination">
    {% if guest_book_list.has_previous %}
      <li>
        <a href="?page={{ guest_book_list.previous_page_number }}">
          <span>Previous</span>
        </a>
      </li>
    {% else %}
      <li class="disabled">
        <a href="#">
          <span>Previous</span>
        </a>
      </li>
    {% endif %}

    {% for page in page_list %}
      <li {% if page == guest_book_list.number %}class="active"{% endif %}>
        <a href="?page={{ page }}">{{ page }}</a>
      </li>
    {% endfor %}

    {% if guest_book_list.has_next %}
      <li>
        <a href="?page={{ guest_book_list.next_page_number }}">
          <span>Next</span>
        </a>
      </li>
    {% else %}
      <li class="disabled">
        <a href="#">
          <span>Next</span>
        </a>
      </li>
    {% endif %}
  </ul>
</div>

</body>
</html>


Result page

http://sdevvm001.cafe24.com:8080/catalog/ 

https://github.com/rspivak/sftpserver


Runs fine on Windows too


c:\Python27\Scripts>sftpserver.exe -k id_rsa.key -l DEBUG


#-*- encoding: utf-8 -*-

import paramiko
pkey = paramiko.RSAKey.from_private_key_file('c:\Python27\Scripts\id_rsa.key')
transport = paramiko.Transport(('localhost', 3373))
transport.connect(username='admin', password='admin')
sftp = paramiko.SFTPClient.from_transport(transport)
print sftp.listdir('.')
"""
output:
['loop.py', 'stub_sftp.py', 'sftpserver-script.py']
"""
""" download """
sftp.get('sftpserver-script.py', 'sftpserver-script.py')


https://github.com/rspivak/sftpserver/blob/master/src/sftpserver/stub_sftp.py

class StubServer (ServerInterface):

    def check_auth_password(self, username, password):

        # all are allowed

        return AUTH_SUCCESSFUL


Applying pyotp in the check_auth_password function above would probably be good security-wise....

https://github.com/pyotp/pyotp
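For reference, pyotp's default TOTP is plain RFC 6238 over HMAC-SHA1, so what such a check would boil down to can be sketched with the stdlib alone (a sketch, not sftpserver code; the function names are mine, and a real server would instead call `pyotp.TOTP(secret).verify(password)` against a per-user stored secret):

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, t=None, step=30, digits=6):
    """RFC 6238 TOTP (HMAC-SHA1), the same code pyotp.TOTP generates. Python 3."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)


def check_auth_password(username, password, secret_b32, t=None):
    """What a TOTP-based check_auth_password would boil down to (sketch):
    treat the submitted password as the current one-time code."""
    return password == totp(secret_b32, t)
```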


python_window_daemon.py

#-*- encoding: utf-8 -*-
# Now works in python 3 as well as 2

import sys, threading, time, os, datetime, time, inspect, subprocess, socket
from subprocess import PIPE, STDOUT
from wsgiref.simple_server import make_server

class Webshite():

    def __init__(self):
        self.hostname = socket.getfqdn()
        self.header = """
        <html>
        <header>
        <style>
        body {
            background-color:#6C7A89;
        }
        p {
            color:white;
            font-family:Consolas;
        }
        </style>
        </header>
        <body>
        """
        self.shite = "<p>"+str(self.hostname)+"</p>"

        self.footer = """
        </body>
        </html>
        """

    def grains(self, environ, start_response):
        self.environ = environ
        self.start = start_response
        status = '200 OK'
        response_headers = [('Content-type','text/html; charset=utf-8')]
        self.start(status, response_headers)
        fullsite = self.header + self.shite + self.footer
        fullsite = [fullsite.encode('utf-8')] # in python 3, this needed to be a list, and encoded
        return fullsite

    def run(self):
        srv = make_server('127.0.0.1', 8080, self.grains)

        while True:
            try:
                threading.Thread(target=srv.handle_request).start()  # pass the callable, don't call it here
            except KeyboardInterrupt:
                exit()



# ------------------ terrible daemon code for windows -------------------
if __name__ == '__main__':
    webshite = Webshite()

    Windows = sys.platform == 'win32'
    ProcessFileName = os.path.realpath(__file__)
    pidName = ProcessFileName.split('\\')[-1].replace('.py','')

    if Windows:
        pidFile = 'c:\\Windows\\Temp\\'+pidName+'.pid'
    else:
        pidFile = '/tmp/'+pidName+'.pid'


    def start(pidfile, pidname):
        """ Create process file, and save process ID of detached process """
        pid = ""
        if Windows:
            #start child process in detached state
            DETACHED_PROCESS = 0x00000008
            p = subprocess.Popen([sys.executable, ProcessFileName, "child"],
                                    creationflags=DETACHED_PROCESS)
            pid = p.pid

        else:
            p = subprocess.Popen([sys.executable, ProcessFileName, "child"],
                                    stdout = PIPE, stderr = PIPE)
            pid = p.pid


        print("Service", pidname, pid, "started")
        # create processfile to signify process has started
        with open(pidfile, 'w') as f:
            f.write(str(pid))
        f.close()
        os._exit(0)


    def stop(pidfile, pidname):
        """ Kill the process and delete the process file """
        procID = ""
        try:
            with open(pidfile, "r") as f:
                procID = f.readline()
            f.close()
        except IOError:
            print("process file does not exist, but that's ok <3 I still love you")

        if procID:
            if Windows:
                try:
                    killprocess = subprocess.Popen(['taskkill.exe','/PID',procID,'/F'],
                                                        stdout = PIPE, stderr = PIPE)
                    killoutput = killprocess.communicate()

                except Exception as e:
                    print(e)
                    print ("could not kill ",procID)
                else:
                    print("Service", pidname, procID, "stopped")

            else:
                try:
                    subprocess.Popen(['kill','-SIGTERM',procID])
                except Exception as e:
                    print(e)
                    print("could not kill "+procID)
                else:
                    print("Service "+procID + " stopped")

            #remove the pid file to signify the process has ended
            os.remove(pidfile)

    if len(sys.argv) == 2:

        if sys.argv[1] == "start":

            if os.path.isfile(pidFile) == False:
                start(pidFile, pidName)
            else:
                print("process is already started")

        elif sys.argv[1] == "stop":

            if os.path.isfile(pidFile) == True:
                stop(pidFile, pidName)
            else:
                print("process is already stopped")

        elif sys.argv[1] == "restart":
                stop(pidFile, pidName)
                start(pidFile, pidName)

        # This is only run on windows in the detached child process
        elif sys.argv[1] == "child":
            webshite.run()
    else:
        print("usage: python "+pidName+".py start|stop|restart")

#kill main
os._exit(0)


Other posts in the 'Python' category

rrdmod  (0) 2017.02.03
Django + djangorestframework + django_rest_swagger 시작  (0) 2017.02.01
Pika Python AMQP Client Library  (0) 2017.01.31
s3 example  (0) 2016.12.28
daemonizing  (0) 2016.12.22

When collecting traffic with rrd, you sometimes get sudden traffic spikes, as in the picture below.



When that happens you have to dump the rrd database -> fix the data -> restore,


which is quite a tedious job.


So I made a little script...


1. Dump the rrd file to xml,

2. change the data for the time range given as a param to 0.0000000000e+00,

3. and restore from the modified xml.


tip:

Data collected after the dump would otherwise be lost,

so if the lastupdate time differs before and after the dump, the raw data collected after the dump is fetched

and an update is performed.


[root@new test]# /usr/local/python2.6/bin/python rrdmod.py -h

rrdmod.py [options]

Options:

-t, --time=<timestamp>   time format %Y%m%d%H or %Y%m%d

-f, --file=<filename>    rrd file name


The code hasn't been cleaned up yet, so it may be a bit awkward to read; I plan to tidy it up and update this post...
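At the heart of the script below is a regex that matches the timestamped `<row>` lines in the `rrdtool dump` xml and blanks both values. Its behavior can be checked on the sample line quoted in modrrd()'s docstring (the `line` and `mod_time` values here just mirror that sample):

```python
import re

# A line as emitted by `rrdtool dump`, for a run with -t 2017020215
line = "<!-- 2017-02-02 15:00:00 KST / 1486015200 --> <row><v>NaN</v><v>NaN</v></row>"
mod_time = "2017-02-02 15"   # what main() turns '2017020215' into
pattern = re.compile(r" %s\s?[:\d]+ \w+ / (\d+) --> <row><v>(.*)</v><v>(.*)</v></row>" % mod_time)
init_value = "0.0000000000e+00"

m = pattern.search(line)
# group 1 is the unix timestamp; groups 2 and 3 are the two data values,
# and both get replaced with the zero value, exactly as modrrd() does
fixed = line.replace(m.group(2), init_value).replace(m.group(3), init_value)
```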


import sys
import subprocess
import getopt
import datetime
import time
import re
import os


def get_rrd_lastinfo(rrd_file):
    last_time, last_input, last_output = None, None, None

    p = subprocess.Popen(
            ('rrdtool', 'lastupdate', rrd_file), stdout=subprocess.PIPE)
    out, err = p.communicate()
    for s in out.split("\n"):
        m = re.search(r"(\d+): (\d+) (\d+)", s)
        if m:
            last_time, last_input, last_output = m.groups()

    return last_time, last_input, last_output


def get_rrd_update_value(rrd_file, start_time, end_time, input, output):
    values = []
    p = subprocess.Popen(
        ('rrdtool', 'fetch', rrd_file, 'AVERAGE', '-s', start_time, '-e', end_time), stdout=subprocess.PIPE)
    out, err = p.communicate()
    for s in out.split("\n"):
        m = re.search(r"(\d+): ([\.e\+\-\d]+) ([\.e\+\-\d]+)", s)
        if m:
            last_time, last_input, last_output = m.groups()
            last_input = int(eval(last_input)*300)
            last_output = int(eval(last_output)*300)
            values.append((last_time, last_input, last_output))

    values.reverse()
    input = float(input)
    output = float(output)

    for n, v in enumerate(values):
        input -= v[1]
        output -= v[2]
        v = v + ("%d" % input,)
        v = v + ("%d" % output,)
        values[n] = v

    values.reverse()
    for v in values:
        print v
        subprocess.Popen(
            ('rrdtool', 'update', rrd_file, '%s:%s:%s' % (v[0], v[3], v[4])), stdout=subprocess.PIPE).communicate()


def modrrd(mod_time, rrd_file):
    """
    <!-- 2017-02-02 15:00:00 KST / 1486015200 --> <row><v>NaN</v><v>NaN</v></row>
    """
    pattern = re.compile(r" %s\s?[:\d]+ \w+ / (\d+) --> <row><v>(.*)</v><v>(.*)</v></row>" % mod_time)
    init_value = "0.0000000000e+00"

    last_update_time = get_rrd_lastinfo(rrd_file)[0]

    p1 = subprocess.Popen(
            ('rrdtool', 'dump', rrd_file), stdout=subprocess.PIPE)

    if os.path.exists('%s.new.rrd' % rrd_file):
        os.unlink('%s.new.rrd' % rrd_file)

    fp = open("%s.xml" % rrd_file, "a+")
    for num, line in enumerate(p1.stdout):
        matching = pattern.search(line)
        if matching:
            if len(matching.groups()) == 3:
                line = line.replace(matching.groups()[1], init_value).replace(matching.groups()[2], init_value)

        fp.write(line)
    fp.close()

    p2 = subprocess.Popen(
            ('rrdtool', 'restore', "%s.xml" % rrd_file, '%s.new.rrd' % rrd_file), stdout=subprocess.PIPE)

    out, err = p2.communicate()

    if os.path.exists("%s.xml" % rrd_file):
        os.unlink("%s.xml" % rrd_file)

    _last_update_time, _last_input, _last_output = get_rrd_lastinfo(rrd_file)

    if last_update_time != _last_update_time:
        get_rrd_update_value(rrd_file, last_update_time, _last_update_time, _last_input, _last_output)


def usage():
    print 'rrdmod.py [options]'
    print 'Options:'
    print '-t, --time=<timestamp>   time format %Y%m%d%H or %Y%m%d'
    print '-f, --file=<filename>    rrd file name'


def main():
    mod_time, rrd_file = None, None
    try:
        opts, args = getopt.getopt(sys.argv[1:], "ht:f:", ["time=", "file="])
    except getopt.GetoptError:
        usage()
        sys.exit(2)

    for opt, value in opts:
        if opt in ("-t", "--time"):
            mod_time = value
            if len(mod_time) == 10:
                mod_time = datetime.datetime.strptime(mod_time, '%Y%m%d%H')
                mod_time = mod_time.strftime("%Y-%m-%d %H")
            elif len(mod_time) == 8:
                mod_time = datetime.datetime.strptime(mod_time, '%Y%m%d')
                mod_time = mod_time.strftime("%Y-%m-%d")
            else:
                usage()
                sys.exit(2)

            # make timestamp
            #mod_time = time.mktime(mod_time.timetuple())

        elif opt in ("-f", "--file"):
            rrd_file = value
        elif opt == "-h":
            usage()
            sys.exit()

    if mod_time == None or rrd_file == None:
        usage()
        sys.exit()

    modrrd(mod_time, rrd_file)


if __name__ == "__main__":
    main()


Test environment

CentOS release 6.8 (Final)

Python 3.4.6

Django-1.10.5

djangorestframework-3.5.3

django_rest_swagger-2.1.1

MariaDB-server-10.0.29

pipenv-3.2.11


1. CentOS release 6.8 (Final)

    1-1. Install CentOS 6.8 minimal

    1-2. Install Development tools

    ]# yum update

    ]# yum groupinstall 'Development tools'


2. Python 3.4.6

    2-1. Install Python 3.4.6

    ]# wget https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz

    ]# tar zxf Python-3.4.6.tgz

    ]# cd Python-3.4.6

    ]# ./configure --prefix=/usr/local/python3.4 --enable-shared

    ]# make

    ]# make install

    ]# echo "/usr/local/python3.4/lib" >> /etc/ld.so.conf.d/python3.4.conf

    ]# ldconfig

    ]# /usr/local/python3.4/bin/pip3 install pipenv

    ]# ln -s /usr/local/python3.4/bin/python3 /usr/local/bin/

    ]# ln -s /usr/local/python3.4/bin/python3.4 /usr/local/bin/

    ]# ln -s /usr/local/python3.4/bin/pip3 /usr/local/bin/

    ]# ln -s /usr/local/python3.4/bin/pip3.4 /usr/local/bin/

    ]# ln -s /usr/local/python3.4/bin/pipenv /usr/local/bin/

    ]# ln -s /usr/local/python3.4/bin/virtualenv /usr/local/bin/

    

    Tip: https://www.python.org/dev/peps/pep-0513/#ucs-2-vs-ucs-4-builds

          The --enable-unicode=ucs4 option applies only to CPython 2.x and 3.0 ~ 3.2.
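Whether an interpreter is a narrow (UCS-2) or wide (UCS-4) build can be checked from Python itself; a quick sketch (on CPython >= 3.3, PEP 393 made every build behave as wide, which is why the flag only matters for the versions the tip names):

```python
import sys

# 0xFFFF on old narrow (UCS-2) builds; 0x10FFFF on UCS-4 / PEP 393 builds
wide_build = sys.maxunicode == 0x10FFFF
print("wide build:", wide_build)
```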


3. MariaDB-server-10.0.29

    3-1. Install MariaDB-server-10.0.29

    ]# vi /etc/yum.repos.d/MariaDB.repo

    [mariadb]

    name = MariaDB

    baseurl = http://yum.mariadb.org/10.0/centos6-amd64

    gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB

    gpgcheck=1

    

    ]# yum install MariaDB-server MariaDB-client MariaDB-devel

    ]# /etc/rc.d/init.d/mysqld start

    ]# mysql -u root

    MariaDB [(none)]> CREATE DATABASE test_django CHARACTER SET UTF8;

    MariaDB [(none)]> CREATE USER django@localhost IDENTIFIED BY 'password';

    MariaDB [(none)]> CREATE USER django@127.0.0.1 IDENTIFIED BY 'password';

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON test_django.* TO django@localhost;

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON test_django.* TO django@127.0.0.1;

    MariaDB [(none)]> FLUSH PRIVILEGES;


4. Django-1.10.5, djangorestframework-3.5.3, django_rest_swagger-2.1.1, mysqlclient-1.3.9

    4-1. Install Django, djangorestframework, django_rest_swagger, and mysqlclient

    ]# mkdir /home/test-django

    ]# cd /home/test-django

    ]# pipenv install mysqlclient django djangorestframework django-rest-swagger

    

5. Start the djangorestframework tutorial

]# pipenv shell

(test-django) ]# django-admin startproject test_api

(test-django) ]# cd test_api

(test-django) ]# vi test_api/settings.py


"""

ALLOWED_HOST 수정

DATABASE 수정

"""

- ALLOWED_HOSTS = []

+ ALLOWED_HOSTS = ['*']


- DATABASES = {

-    'default': {

-        'ENGINE': 'django.db.backends.sqlite3',

-        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),

-    }

- }


+ DATABASES = {

+    'default': {

+        'ENGINE': 'django.db.backends.mysql',

+        'NAME': 'test_django',

+        'USER': 'django',

+        'PASSWORD': 'password',

+        'HOST': 'localhost',   # Or an IP Address that your DB is hosted on

+        'PORT': '3306',

+        'OPTIONS': {

+            'init_command': "SET sql_mode='STRICT_TRANS_TABLES'",

+        },

+    }

}

 


""" migrate 수행 """

(test-django) ]# ./manage.py migrate


""" admin 계정 생성 """

(test-django) ]# ./manage.py createsuperuse


""" app 추가 """

(test-django) ]# ./manage.py startapp v1


""" Post 모델 작성 """

(test-django) ]# vi v1/models.py

from django.db import models

class Post(models.Model):
   title = models.CharField(max_length=200)
   content = models.TextField()
   created_at = models.DateTimeField(auto_now_add=True)
   updated_at = models.DateTimeField(auto_now=True)

   def __str__(self):
       return "{}: {}".format(self.pk, self.title)


""" INSTALLED_APPS 에 v1, rest_framework, rest_framework_swagger 추가 """

(test-django) ]# vi test_api/settings.py

INSTALLED_APPS.append('v1')
INSTALLED_APPS.append('rest_framework')
INSTALLED_APPS.append('rest_framework_swagger')


""" v1 App의 Post 모델 생성 """

(test-django) ]# ./manage.py makemigrations v1

(test-django) ]# ./manage.py migrate v1



""" admin 페이지에서 Post 관리할 수 있도록 추가 """

(test-django) ]# vi v1/admin.py

from django.contrib import admin
from .models import *

admin.site.register(Post)


""" serializers 클래스 생성 """

(test-django) ]# vi v1/serializers.py

from rest_framework import serializers
from .models import *

class PostSerializer(serializers.ModelSerializer):
    class Meta:
        model = Post
        fields = '__all__'


""" viewset 클래스 생성 """

(test-django) ]# vi v1/views.py

from django.shortcuts import render
from .models import *
from .serializers import *
from rest_framework import viewsets

class PostViewSet(viewsets.ModelViewSet):
    queryset = Post.objects.all()
    serializer_class = PostSerializer


""" app urls 생성 및 project urls 에 추가 """

(test-django) ]# vi v1/urls.py



from django.conf.urls import include, url

from .views import *

from rest_framework import routers

from rest_framework_swagger.views import get_swagger_view


router = routers.DefaultRouter()

router.register(r'post', PostViewSet)


schema_view = get_swagger_view(title='TEST API')


urlpatterns = [

    url(r'^', include(router.urls)),

    url(r'^swagger', schema_view),

]

 


(test-django) ]# vi test_api/urls.py

"""test_api URL Configuration

The `urlpatterns` list routes URLs to views. For more information please see:
    https://docs.djangoproject.com/en/1.10/topics/http/urls/
Examples:
Function views
    1. Add an import:  from my_app import views
    2. Add a URL to urlpatterns:  url(r'^$', views.home, name='home')
Class-based views
    1. Add an import:  from other_app.views import Home
    2. Add a URL to urlpatterns:  url(r'^$', Home.as_view(), name='home')
Including another URLconf
    1. Import the include() function: from django.conf.urls import url, include
    2. Add a URL to urlpatterns:  url(r'^blog/', include('blog.urls'))
"""


from django.conf.urls import include, url

from django.contrib import admin


urlpatterns = [

    url(r'^admin/', admin.site.urls),

    url(r'^v1/', include('v1.urls')),

]

 


""" 실행 """

(test-django) ]# ./manage.py runserver 0.0.0.0:80


""" 웹브라우져로 접속 및 테스트 """



 


I've tested this basic example; next is to visit the official tutorial site and work through it step by step...

http://www.django-rest-framework.org/#tutorial


Receiving.py


#!/usr/bin/env python

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

print ' [*] Waiting for messages. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] Received %r" % (body,)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

channel.start_consuming()



Sending.py


#!/usr/bin/env python

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
               'localhost'))
channel = connection.channel()
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print " [x] Sent 'Hello World!'"
connection.close()


References:

https://orebibou.com/2016/03/centos-7%E3%81%AB%E5%88%86%E6%95%A3%E3%82%AA%E3%83%96%E3%82%B8%E3%82%A7%E3%82%AF%E3%83%88%E3%82%B9%E3%83%88%E3%83%AC%E3%83%BC%E3%82%B8%E3%80%8Eceph%E3%80%8F%E3%82%92%E3%82%A4%E3%83%B3%E3%82%B9/

http://docs.ceph.com/docs/master/start/quick-ceph-deploy/

http://www.nminoru.jp/~nminoru/unix/ceph/how-to-use-rbd.html

https://www.redhat.com/archives/virt-tools-list/2016-January/msg00007.html


1. Set up the KVM guests

192.168.122.11/ceph-node-admin/CentOS Linux release 7.3.1611 (Core)

192.168.122.12/ceph-node-001/CentOS Linux release 7.3.1611 (Core) 

192.168.122.13/ceph-node-002/CentOS Linux release 7.3.1611 (Core)

192.168.122.14/ceph-node-003/CentOS Linux release 7.3.1611 (Core)


2. Common steps on ceph-node-admin and ceph-node-001 ~ 003


]# yum update

]# yum install net-tools


""" 시간동기화툴 설치 및 시작 """

]# yum install chrony

]# systemctl enable chronyd

]# systemctl start chronyd


""" SELINUX 비활성화"""

]# vi /etc/selinux/config

- SELINUX=enforcing

+ SELINUX=disabled


""" firewalld 종료 """

]# systemctl stop firewalld

]# systemctl disable firewalld


""" hostname 변경""

ceph-node-admin

]# hostnamectl set-hostname ceph-node-admin


ceph-node-001

]# hostnamectl set-hostname ceph-node-001


ceph-node-002

]# hostnamectl set-hostname ceph-node-002


ceph-node-003

]# hostnamectl set-hostname ceph-node-003


""" NetworkManager 종료 """

]# systemctl stop NetworkManager

]# systemctl disable NetworkManager


""" /etc/hosts 수정 """

]# vi /etc/hosts

+ 192.168.122.11 ceph-node-admin

+ 192.168.122.12 ceph-node-001

+ 192.168.122.13 ceph-node-002

+ 192.168.122.14 ceph-node-003


""" ceph 계정생성 """

ceph-node-001~003

]# useradd -d /home/ceph -m ceph

]# passwd ceph

]# echo -e 'Defaults:ceph !requiretty\nceph ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph

]# chmod 440 /etc/sudoers.d/ceph


ceph-node-admin

--------------------------------------------------------------------------------------------------------------

Is there really any need to create a separate ceph account on the admin node? Just use root.

]# useradd -d /home/ceph -m ceph

]# passwd ceph

]# echo -e 'Defaults:ceph !requiretty\nceph ALL = (root) NOPASSWD:ALL' | tee /etc/sudoers.d/ceph

]# chmod 440 /etc/sudoers.d/ceph

]# su - ceph

--------------------------------------------------------------------------------------------------------------


]# vi /root/.ssh/config

+ Host ceph-node-001

+    Hostname ceph-node-001

+    User ceph

+ Host ceph-node-002

+    Hostname ceph-node-002

+    User ceph

+ Host ceph-node-003

+    Hostname ceph-node-003

+    User ceph


]# ssh-keygen

]# ssh-copy-id ceph-node-001

]# ssh-copy-id ceph-node-002

]# ssh-copy-id ceph-node-003 


""" ceph repo 등록 ""

]# yum install http://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-1.el7.noarch.rpm


""" 재시작 """

]# reboot


3. Steps on ceph-node-admin

--------------------------------------------------------------------------------------------------------------

]# yum install ceph-deploy

]# su - ceph

]$ mkdir my-cluster

]$ cd my-cluster

]$ ceph-deploy new ceph-node-001 ceph-node-002 ceph-node-003

]$ vi ceph.conf

+ public_network = 192.168.122.0/24

+ cluster_network = 192.168.122.0/24

]$ ceph-deploy install ceph-node-001 ceph-node-002 ceph-node-003 ceph-node-admin

]$ ceph-deploy mon create-initial

]$ ceph-deploy admin ceph-node-001 ceph-node-002 ceph-node-003

]$ exit

ceph-node-admin, ceph-node-001 ~ ceph-node-003

]# chmod +r /etc/ceph/ceph.client.admin.keyring

]# su - ceph

]$ ceph health detail

--------------------------------------------------------------------------------------------------------------


]# yum install ceph-deploy
]# mkdir my-cluster
]# cd my-cluster

]# ceph-deploy new ceph-node-001 ceph-node-002 ceph-node-003

]# vi ceph.conf

+ public_network = 192.168.122.0/24

+ cluster_network = 192.168.122.0/24

+ osd pool default size = 2

]# ceph-deploy install ceph-node-admin ceph-node-001 ceph-node-002 ceph-node-003

]# ceph-deploy mon create-initial

]# ceph-deploy osd create ceph-node-001:vdb --zap

]# ceph-deploy osd create ceph-node-002:vdb --zap

]# ceph-deploy osd create ceph-node-003:vdb --zap

]# ceph health detail

]# ceph osd tree;                 

ID WEIGHT  TYPE NAME              UP/DOWN REWEIGHT PRIMARY-AFFINITY 

-1 0.04376 root default                                             

-2 0.01459     host ceph-node-001                                   

 0 0.01459         osd.0               up  1.00000          1.00000 

-3 0.01459     host ceph-node-002                                   

 1 0.01459         osd.1               up  1.00000          1.00000 

-4 0.01459     host ceph-node-003                                   

 2 0.01459         osd.2               up  1.00000          1.00000 
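
The WEIGHT column can be sanity-checked: by the usual CRUSH convention (which ceph-deploy follows by default), an OSD's weight is its disk size in TiB, so 0.01459 corresponds to the ~15 GB vdb disks used here. A quick check:

```python
# CRUSH weight is, by convention, the OSD's disk size in TiB.
weight_tib = 0.01459              # per-OSD WEIGHT from `ceph osd tree`

size_gib = weight_tib * 1024      # TiB -> GiB
print(round(size_gib, 2))         # 14.94 -- i.e. the ~15 GB vdb disk

root_weight = 3 * weight_tib      # three OSDs under root "default"
print(round(root_weight, 5))      # ~0.04377 (the 0.04376 shown is display truncation)
```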


4. Work on ceph-node-001

]# rbd create vm001 --size 20000 --image-feature layering  

]# rbd ls

vm001

]# rbd info vm001

rbd image 'vm001':

        size 20000 MB in 5000 objects

        order 22 (4096 kB objects)

        block_name_prefix: rbd_data.10332ae8944a

        format: 2

        features: layering

        flags: 
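
The object count in the `rbd info` output is just image size divided by object size; `order 22` means 2^22-byte (4 MiB) objects:

```python
order = 22
object_bytes = 1 << order            # order 22 -> 4 MiB objects
image_bytes = 20000 * 1024 * 1024    # --size 20000 (MB)

objects = image_bytes // object_bytes
print(objects)  # 5000, matching "size 20000 MB in 5000 objects"
```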

]# ceph auth get-or-create client.vmimages mon 'allow r' osd 'allow rwx pool=rbd'

[client.vmimages]

        key = AQBBu4FYTDZuKBAAXnlDNpSLzkwmYo84u0I9oQ==


]# ceph auth list

installed auth entries:


osd.0

        key: AQAzuIFYPSfQIBAASzGIUb+VQKk3c48Zg5bu9Q==

        caps: [mon] allow profile osd

        caps: [osd] allow *

osd.1

        key: AQBQuIFYXS8LGBAA7qUoQwbIaEnYTboeJYrUgw==

        caps: [mon] allow profile osd

        caps: [osd] allow *

osd.2

        key: AQBmuIFYSSbJFxAAxgN1NWiJ7SOJvMl1nZll8w==

        caps: [mon] allow profile osd

        caps: [osd] allow *

client.admin

        key: AQAjtoFY6S21MhAA71nAPC3LGIXOGYi9lAB5pg==

        caps: [mds] allow *

        caps: [mon] allow *

        caps: [osd] allow *

client.bootstrap-mds

        key: AQAltoFYOnCBMhAAQ+IsKhiRQXMEj6y9nRkTmg==

        caps: [mon] allow profile bootstrap-mds

client.bootstrap-osd

        key: AQAktoFYqpAVHBAA9D4xX+DannHKJumh4LWrGQ==

        caps: [mon] allow profile bootstrap-osd

client.bootstrap-rgw

        key: AQAltoFYzZj3DRAA2a8pbGitRlaQH31z/pdTgQ==

        caps: [mon] allow profile bootstrap-rgw

client.vmimages

        key: AQBBu4FYTDZuKBAAXnlDNpSLzkwmYo84u0I9oQ==

        caps: [mon] allow r

        caps: [osd] allow rwx pool=rbd


5. Work on the KVM host

]# echo "<secret ephemeral='no' private='no'> <usage type='ceph'> <name>client.vmimages secret</name></usage></secret>" > secret.xml

]# virsh secret-set-value 92447fe3-b22f-4e88-b07a-01ab839664d8 AQBBu4FYTDZuKBAAXnlDNpSLzkwmYo84u0I9oQ==

]# vi ceph-pool.xml

+<pool type="rbd">

+   <name>ceph-pool</name>

+  <source>

+     <name>rbd</name>

+     <host name="192.168.122.12" port="6789" />

+     <auth username='vmimages' type='ceph'>

+       <secret uuid='92447fe3-b22f-4e88-b07a-01ab839664d8'/>

+     </auth>

+   </source>

+</pool>


]# vi ceph-client.xml

+    <disk type='network' device='disk'>

+      <driver name='qemu' type='raw' cache='none'/>

+      <auth username='vmimages'>

+        <secret type='ceph' uuid='92447fe3-b22f-4e88-b07a-01ab839664d8'/>

+      </auth>

+      <source protocol='rbd' name='rbd/vm002'>

+        <host name='192.168.122.12' port='6789'/>

+      </source>

+      <target dev='vdb' bus='virtio'/>

+      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>

+    </disk>


An RBD created through virt-manager is usable, but an RBD created with the command below (as shown in the ceph docs) fails with an error saying the image header cannot be read.

qemu-img create -f rbd rbd:rbd/vm003 20G


vm001 and vm002 were created with virt-manager; vm003 was created with the command above.


[root@ceph-node-001 home]# rbd ls

vm001

vm002

vm003

[root@ceph-node-001 home]# rbd info vm001

rbd image 'vm001':

        size 20480 MB in 5120 objects

        order 22 (4096 kB objects)

        block_name_prefix: rbd_data.116d116ae494

        format: 2

        features: layering, striping

        flags: 

        stripe unit: 4096 kB

        stripe count: 1

[root@ceph-node-001 home]# rbd info vm002

rbd image 'vm002':

        size 20480 MB in 5120 objects

        order 22 (4096 kB objects)

        block_name_prefix: rbd_data.10e0684a481a

        format: 2

        features: layering, striping

        flags: 

        stripe unit: 4096 kB

        stripe count: 1

[root@ceph-node-001 home]# rbd info vm003

rbd image 'vm003':

        size 20480 MB in 5120 objects

        order 22 (4096 kB objects)

        block_name_prefix: rbd_data.118274b0dc51

        format: 2

        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten

        flags: 


The cause was the rbd image's features...

]# rbd create vm004 --size 20000 --image-feature layering

An image created this way mounts and works normally.

Need to dig up the documentation on these features.
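
For reference while digging: the `features:` line is a bitmask. A decoding sketch, assuming the jewel-era bit values (vm003's list above is exactly the jewel default mask 61; the qemu/krbd stack here only copes with layering, which is why `--image-feature layering` works — `rbd feature disable <image> <feature>` can also strip features from an existing image):

```python
# RBD feature flag bit values (jewel-era constants; verify against your release).
RBD_FEATURES = {
    1: "layering", 2: "striping", 4: "exclusive-lock",
    8: "object-map", 16: "fast-diff", 32: "deep-flatten", 64: "journaling",
}

def decode_features(mask):
    """Return the feature names set in an RBD features bitmask."""
    return [name for bit, name in sorted(RBD_FEATURES.items()) if mask & bit]

print(decode_features(61))  # jewel default rbd_default_features = 61 (vm003's list)
print(decode_features(1))   # what --image-feature layering requests
```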


Reference: http://kit2013.tistory.com/199


# dd sparse image

]# dd if=/dev/zero of=test1.img bs=10M count=0 seek=1K 

]# dd if=/dev/zero of=test2.img bs=10M count=0 seek=1K


]# ls -alh

-rw-r--r--   1 root  root   10G Jan 12 14:28 test1.img

-rw-r--r--   1 root  root   10G Jan 12 14:28 test2.img


]# du -sh *

0       test1.img

0       test2.img


# losetup 

]# losetup -a

/dev/loop0: [2051]:1316303 (/opt/stack/data/stack-volumes-default-backing-file)

/dev/loop1: [2051]:1316304 (/opt/stack/data/stack-volumes-lvmdriver-1-backing-file)


]# losetup /dev/loop2 test1.img 

[root@localhost home]# losetup -a

/dev/loop0: [2051]:1316303 (/opt/stack/data/stack-volumes-default-backing-file)

/dev/loop1: [2051]:1316304 (/opt/stack/data/stack-volumes-lvmdriver-1-backing-file)

/dev/loop2: [2050]:12 (/home/test1.img)


]# pvscan 

  PV /dev/loop1   VG stack-volumes-lvmdriver-1   lvm2 [10.01 GiB / 8.01 GiB free]

  PV /dev/loop0   VG stack-volumes-default       lvm2 [10.01 GiB / 10.01 GiB free]

  Total: 2 [20.02 GiB] / in use: 2 [20.02 GiB] / in no VG: 0 [0   ]


]# pvcreate /dev/loop2

  Physical volume "/dev/loop2" successfully created.

[root@localhost home]# pvscan 

  PV /dev/loop1   VG stack-volumes-lvmdriver-1   lvm2 [10.01 GiB / 8.01 GiB free]

  PV /dev/loop0   VG stack-volumes-default       lvm2 [10.01 GiB / 10.01 GiB free]

  PV /dev/loop2                                  lvm2 [10.00 GiB]

  Total: 3 [30.02 GiB] / in use: 2 [20.02 GiB] / in no VG: 1 [10.00 GiB]


]# vgscan 

  Reading volume groups from cache.

  Found volume group "stack-volumes-lvmdriver-1" using metadata type lvm2

  Found volume group "stack-volumes-default" using metadata type lvm2

[root@localhost home]# vgdisplay

  --- Volume group ---

  VG Name               stack-volumes-lvmdriver-1

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  7

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                2

  Open LV               1

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               10.01 GiB

  PE Size               4.00 MiB

  Total PE              2562

  Alloc PE / Size       512 / 2.00 GiB

  Free  PE / Size       2050 / 8.01 GiB

  VG UUID               Evn3J0-O09c-9dei-2eKI-Y8Yl-lwhF-FqvTlb

   

  --- Volume group ---

  VG Name               stack-volumes-default

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               10.01 GiB

  PE Size               4.00 MiB

  Total PE              2562

  Alloc PE / Size       0 / 0   

  Free  PE / Size       2562 / 10.01 GiB

  VG UUID               jsQjQe-wiwI-1ZA2-lvBY-flvp-Bn5G-CGJrb1


]# vgcreate vg1 /dev/loop2

  Volume group "vg1" successfully created

[root@localhost home]# vgscan                 

  Reading volume groups from cache.

  Found volume group "stack-volumes-lvmdriver-1" using metadata type lvm2

  Found volume group "vg1" using metadata type lvm2

  Found volume group "stack-volumes-default" using metadata type lvm2

[root@localhost home]# vgdisplay              

  --- Volume group ---

  VG Name               stack-volumes-lvmdriver-1

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  7

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                2

  Open LV               1

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               10.01 GiB

  PE Size               4.00 MiB

  Total PE              2562

  Alloc PE / Size       512 / 2.00 GiB

  Free  PE / Size       2050 / 8.01 GiB

  VG UUID               Evn3J0-O09c-9dei-2eKI-Y8Yl-lwhF-FqvTlb

   

  --- Volume group ---

  VG Name               vg1

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               10.00 GiB

  PE Size               4.00 MiB

  Total PE              2559

  Alloc PE / Size       0 / 0   

  Free  PE / Size       2559 / 10.00 GiB

  VG UUID               qCfSGd-ymgV-DWJz-p35j-ZHNr-gCIK-Pgcaw3

   

  --- Volume group ---

  VG Name               stack-volumes-default

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  1

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                0

  Open LV               0

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               10.01 GiB

  PE Size               4.00 MiB

  Total PE              2562

  Alloc PE / Size       0 / 0   

  Free  PE / Size       2562 / 10.01 GiB

  VG UUID               jsQjQe-wiwI-1ZA2-lvBY-flvp-Bn5G-CGJrb1


]# lvcreate -n vg1_lo1 -L 1G vg1  

  Logical volume "vg1_lo1" created.

[root@localhost home]# vgdisplay vg1

  --- Volume group ---

  VG Name               vg1

  System ID             

  Format                lvm2

  Metadata Areas        1

  Metadata Sequence No  2

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                1

  Open LV               0

  Max PV                0

  Cur PV                1

  Act PV                1

  VG Size               10.00 GiB

  PE Size               4.00 MiB

  Total PE              2559

  Alloc PE / Size       256 / 1.00 GiB

  Free  PE / Size       2303 / 9.00 GiB

  VG UUID               qCfSGd-ymgV-DWJz-p35j-ZHNr-gCIK-Pgcaw3
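
The extent numbers in this vgdisplay output can be checked by hand: the 10 GiB loop PV loses 4 MiB to the LVM label/metadata area (pvdisplay reports it as "not usable 4.00 MiB"), and the rest is counted in 4 MiB physical extents:

```python
MIB = 1024 * 1024
pe_size = 4 * MIB
pv_bytes = 10 * 1024 * MIB           # the 10 GiB test1.img loop device
metadata = 4 * MIB                   # "not usable 4.00 MiB" per pvdisplay

total_pe = (pv_bytes - metadata) // pe_size
alloc_pe = (1024 * MIB) // pe_size   # lvcreate -L 1G

print(total_pe)             # 2559, as in "Total PE 2559"
print(alloc_pe)             # 256,  as in "Alloc PE / Size 256 / 1.00 GiB"
print(total_pe - alloc_pe)  # 2303 free PE
```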


]# fdisk  -l


Disk /dev/mapper/vg1-vg1_lo1: 1073 MB, 1073741824 bytes, 2097152 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes



## add 

]# losetup /dev/loop3 test2.img 

[root@localhost home]# losetup -a

/dev/loop0: [2051]:1316303 (/opt/stack/data/stack-volumes-default-backing-file)

/dev/loop1: [2051]:1316304 (/opt/stack/data/stack-volumes-lvmdriver-1-backing-file)

/dev/loop2: [2050]:12 (/home/test1.img)

/dev/loop3: [2050]:13 (/home/test2.img)


]# pvcreate /dev/loop3

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

  Physical volume "/dev/loop3" successfully created.


]# pvdisplay 

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

  --- Physical volume ---

  PV Name               /dev/loop2

  VG Name               vg1

  PV Size               10.00 GiB / not usable 4.00 MiB

  Allocatable           yes 

  PE Size               4.00 MiB

  Total PE              2559

  Free PE               2303

  Allocated PE          256

  PV UUID               9l5ff5-zqk7-qH8n-IEx9-tgjG-2Cjg-CT2L41

   

  --- Physical volume ---

  PV Name               /dev/loop1

  VG Name               stack-volumes-lvmdriver-1

  PV Size               10.01 GiB / not usable 2.00 MiB

  Allocatable           yes 

  PE Size               4.00 MiB

  Total PE              2562

  Free PE               2050

  Allocated PE          512

  PV UUID               DVNkcA-Syfm-5du0-0XFe-QnIE-D5zi-9h23wK

   

  --- Physical volume ---

  PV Name               /dev/loop0

  VG Name               stack-volumes-default

  PV Size               10.01 GiB / not usable 2.00 MiB

  Allocatable           yes 

  PE Size               4.00 MiB

  Total PE              2562

  Free PE               2562

  Allocated PE          0

  PV UUID               TOdLHi-diqH-I9eb-eJ6A-jv3P-YXUx-geAIwx

   

  "/dev/loop3" is a new physical volume of "10.00 GiB"

  --- NEW Physical volume ---

  PV Name               /dev/loop3

  VG Name               

  PV Size               10.00 GiB

  Allocatable           NO

  PE Size               0   

  Total PE              0

  Free PE               0

  Allocated PE          0

  PV UUID               PL6niV-hc1Y-JLTz-zx7k-8rlL-WHIf-8ekmvg


]# vgextend vg1 /dev/loop3 

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

  Volume group "vg1" successfully extended


]# vgdisplay  vg1

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

  --- Volume group ---

  VG Name               vg1

  System ID             

  Format                lvm2

  Metadata Areas        2

  Metadata Sequence No  3

  VG Access             read/write

  VG Status             resizable

  MAX LV                0

  Cur LV                1

  Open LV               0

  Max PV                0

  Cur PV                2

  Act PV                2

  VG Size               19.99 GiB

  PE Size               4.00 MiB

  Total PE              5118

  Alloc PE / Size       256 / 1.00 GiB

  Free  PE / Size       4862 / 18.99 GiB

  VG UUID               qCfSGd-ymgV-DWJz-p35j-ZHNr-gCIK-Pgcaw3


]# lvextend -L+10G /dev/vg1/vg1_lo1     

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

  Size of logical volume vg1/vg1_lo1 changed from 1.00 GiB (256 extents) to 11.00 GiB (2816 extents).

  Logical volume vg1/vg1_lo1 successfully resized.


fdisk -l /dev/mapper/vg1-vg1_lo1 


Disk /dev/mapper/vg1-vg1_lo1: 11.8 GB, 11811160064 bytes, 23068672 sectors

Units = sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes
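
The resize arithmetic lines up exactly: 2816 extents of 4 MiB is precisely the 11811160064 bytes fdisk reports:

```python
MIB = 1024 * 1024
extents = 2816       # from "changed ... to 11.00 GiB (2816 extents)"
pe_size = 4 * MIB    # the VG's PE Size

lv_bytes = extents * pe_size
print(lv_bytes)  # 11811160064, matching the fdisk output above
```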


## remove


]# lvremove /dev/vg1/vg1_lo1

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

Do you really want to remove active logical volume vg1/vg1_lo1? [y/n]: y

  Logical volume "vg1_lo1" successfully removed


]# vgremove vg1

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

  Volume group "vg1" successfully removed


]# pvremove /dev/loop2 /dev/loop3

  WARNING: Not using lvmetad because duplicate PVs were found.

  WARNING: Use multipath or vgimportclone to resolve duplicate PVs?

  WARNING: After duplicates are resolved, run "pvscan --cache" to enable lvmetad.

  Labels on physical volume "/dev/loop2" successfully wiped.

  Labels on physical volume "/dev/loop3" successfully wiped.


]# losetup -d /dev/loop2

[root@localhost home]# losetup -d /dev/loop3

[root@localhost home]# losetup -a

/dev/loop0: [2051]:1316303 (/opt/stack/data/stack-volumes-default-backing-file)

/dev/loop1: [2051]:1316304 (/opt/stack/data/stack-volumes-lvmdriver-1-backing-file)



os: centos7 minimal


partition:

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *        2048      976895      487424   83  Linux

/dev/sda2          976896    98631679    48827392   83  Linux

/dev/sda3        98631680   129880063    15624192   82  Linux swap / Solaris

/dev/sda4       129880064   488396799   179258368    5  Extended

/dev/sda5       129882112   488388607   179253248   83  Linux


network:

]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s25

TYPE=Ethernet

BOOTPROTO=static

DEFROUTE=yes

PEERDNS=yes

PEERROUTES=yes

IPV4_FAILURE_FATAL=no

IPV6INIT=yes

IPV6_AUTOCONF=yes

IPV6_DEFROUTE=yes

IPV6_PEERDNS=yes

IPV6_PEERROUTES=yes

IPV6_FAILURE_FATAL=no

NAME=enp0s25

UUID=f5aad67b-0b2e-46b9-b17d-126c9eaae4b1

DEVICE=enp0s25

ONBOOT=yes

IPADDR=123.140.248.88

NETMASK=255.255.255.0

GATEWAY=123.140.248.254


]# vi /etc/resolv.conf

search .

nameserver 210.220.163.82


selinux:

]# vi /etc/selinux/config

SELINUX=disabled


update:

]# yum update

]# yum install net-tools

]# reboot


kvm install:

# xming install in pc

https://sourceforge.net/projects/xming/


]# yum install qemu-kvm qemu-kvm-tools libvirt virt-install virt-manager virt-viewer virt-top dejavu-lgc-sans-fonts xorg-x11-xauth wget vim

]# systemctl start libvirtd

]# systemctl enable libvirtd

]# export NO_AT_BRIDGE=1

]# alias vi=vim

]# setterm -blength 0

]# mkdir /home/isos

]# cd /home/isos

]# wget http://ftp.daumkakao.com/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1611.iso 

]# virsh net-destroy default

]# virsh net-autostart default --disable

]# mkdir /root/rpms

]# cd /root/rpms

]# wget https://rdo.fedorapeople.org/openstack/openstack-kilo/rdo-release-kilo.rpm

]# rpm -Uvh rdo-release-kilo.rpm

]# yum install openvswitch

]# systemctl start openvswitch

]# systemctl enable openvswitch

]# vi /etc/sysconfig/network-scripts/ifcfg-ovsbr0

DEVICE=ovsbr0

ONBOOT=yes

DEVICETYPE=ovs

TYPE=OVSBridge

BOOTPROTO=static

IPADDR=192.168.100.1

NETMASK=255.255.255.0

HOTPLUG=no

ZONE=trusted


]# ifup ovsbr0

]# iptables -A POSTROUTING -s 192.168.100.0/24 -t nat -j MASQUERADE


]# vi /etc/libvirt/qemu/networks/public.xml

<network>

  <name>public</name>

  <forward mode='bridge'/>

  <bridge name='ovsbr0'/>

  <virtualport type='openvswitch'/>

</network>


]# vi /etc/libvirt/qemu/networks/private.xml

<network>

  <name>private</name>

  <forward mode='nat'/>

  <bridge name='virbr0' stp='on' delay='0'/>

  <mac address='52:54:00:e3:83:e1'/>

  <ip address='192.168.122.1' netmask='255.255.255.0'>

    <dhcp>

      <range start='192.168.122.2' end='192.168.122.254'/>

      <host mac='52:54:00:e5:22:c1' name='test-001' ip='192.168.122.2'/>

      <host mac='52:54:00:e5:22:c2' name='test-002' ip='192.168.122.3'/>

      <host mac='52:54:00:e5:22:c3' name='test-003' ip='192.168.122.4'/>

      <host mac='52:54:00:e5:22:c4' name='test-004' ip='192.168.122.5'/>

    </dhcp>

  </ip>

</network>
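
The four static DHCP `<host>` entries above differ only in the last MAC digit, the hostname index, and the IP; generating them mechanically avoids copy-paste slips. A sketch — the MAC/IP scheme is just the one used in this file:

```python
def dhcp_host_entries(count=4):
    """Generate the <host> lines used in the private network definition."""
    lines = []
    for i in range(1, count + 1):
        lines.append(
            "<host mac='52:54:00:e5:22:c%d' name='test-%03d' ip='192.168.122.%d'/>"
            % (i, i, i + 1)
        )
    return lines

for line in dhcp_host_entries():
    print(line)
```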


]# virsh net-define /etc/libvirt/qemu/networks/public.xml

]# virsh net-define /etc/libvirt/qemu/networks/private.xml

]# virsh net-start public

]# virsh net-start private

]# virsh net-autostart public

]# virsh net-autostart private


]# vi /etc/libvirt/qemu/test-001.xml

]# vi /etc/libvirt/qemu/test-002.xml

]# vi /etc/libvirt/qemu/test-003.xml

]# vi /etc/libvirt/qemu/test-004.xml

<domain type='kvm'>

  <name>test-001</name>

  <memory unit='KiB'>1048576</memory>

  <currentMemory unit='KiB'>1048576</currentMemory>

  <vcpu placement='static'>1</vcpu>

  <os>

    <type arch='x86_64' machine='pc-i440fx-rhel7.0.0'>hvm</type>

    <boot dev='hd'/>

  </os>

  <features>

    <acpi/>

    <apic/>

  </features>

  <cpu mode='custom' match='exact'>

    <model fallback='allow'>Penryn</model>

  </cpu>

  <clock offset='utc'>

    <timer name='rtc' tickpolicy='catchup'/>

    <timer name='pit' tickpolicy='delay'/>

    <timer name='hpet' present='no'/>

  </clock>

  <on_poweroff>destroy</on_poweroff>

  <on_reboot>restart</on_reboot>

  <on_crash>restart</on_crash>

  <pm>

    <suspend-to-mem enabled='no'/>

    <suspend-to-disk enabled='no'/>

  </pm>

  <devices>

    <emulator>/usr/libexec/qemu-kvm</emulator>

    <disk type='file' device='disk'>

      <driver name='qemu' type='raw'/>

      <source file='/var/lib/libvirt/images/test-001.img'/>

      <target dev='vda' bus='virtio'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>

    </disk>

    <controller type='usb' index='0' model='ich9-ehci1'>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>

    </controller>

    <controller type='usb' index='0' model='ich9-uhci1'>

      <master startport='0'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>

    </controller>

    <controller type='usb' index='0' model='ich9-uhci2'>

      <master startport='2'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>

    </controller>

    <controller type='usb' index='0' model='ich9-uhci3'>

      <master startport='4'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>

    </controller>

    <controller type='pci' index='0' model='pci-root'/>

    <controller type='virtio-serial' index='0'>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>

    </controller>

    <interface type='network'>

      <mac address='52:54:00:05:d1:c1'/>

      <source network='public'/>

      <model type='virtio'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>

    </interface>

    <interface type='network'>

      <mac address='52:54:00:e5:22:c1'/>

      <source network='private'/>

      <model type='virtio'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>

    </interface>

    <serial type='pty'>

      <target port='0'/>

    </serial>

    <console type='pty'>

      <target type='serial' port='0'/>

    </console>

    <channel type='unix'>

      <target type='virtio' name='org.qemu.guest_agent.0'/>

      <address type='virtio-serial' controller='0' bus='0' port='1'/>

    </channel>

    <channel type='spicevmc'>

      <target type='virtio' name='com.redhat.spice.0'/>

      <address type='virtio-serial' controller='0' bus='0' port='2'/>

    </channel>

    <input type='tablet' bus='usb'>

      <address type='usb' bus='0' port='1'/>

    </input>

    <input type='mouse' bus='ps2'/>

    <input type='keyboard' bus='ps2'/>

    <graphics type='spice' autoport='yes'>

      <listen type='address'/>

      <image compression='off'/>

    </graphics>

    <sound model='ich6'>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>

    </sound>

    <video>

      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>

    </video>

    <redirdev bus='usb' type='spicevmc'>

      <address type='usb' bus='0' port='2'/>

    </redirdev>

    <redirdev bus='usb' type='spicevmc'>

      <address type='usb' bus='0' port='3'/>

    </redirdev>

    <memballoon model='virtio'>

      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>

    </memballoon>

  </devices>

</domain>


]# qemu-img convert test-001.img test-002.img 

]# qemu-img convert test-001.img test-003.img 

]# qemu-img convert test-001.img test-004.img 


]# virsh define test-001.xml

]# virsh define test-002.xml

]# virsh define test-003.xml

]# virsh define test-004.xml


]# virsh start test-001

]# virsh start test-002

]# virsh start test-003

]# virsh start test-004



On test-001 ~ test-004:

]# cat /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.100.2

NETMASK=255.255.255.0

GATEWAY=192.168.100.1


]# cat /etc/sysconfig/network-scripts/ifcfg-eth1

DEVICE=eth1

BOOTPROTO=dhcp

ONBOOT=yes


]# yum remove NetworkManager


]# vi /usr/sbin/dhclient-script   # comment out the make_resolv_conf calls; diff of the change:

506c506

<     make_resolv_conf

---

>     #make_resolv_conf

592c592

<             make_resolv_conf

---

>             #make_resolv_conf

608c608

<                 make_resolv_conf

---

>                 #make_resolv_conf


]# vi /etc/resolv.conf

search .

nameserver 210.220.163.82


etc:

]# ovs-vsctl list-ports ovsbr0

vnet0

vnet2

vnet4

vnet6

[root@localhost qemu]# brctl show

bridge name     bridge id               STP enabled     interfaces

virbr0          8000.525400e383e1       yes             virbr0-nic

                                                        vnet1

                                                        vnet3

                                                        vnet5

                                                        vnet7




http://prolinuxhub.com/configure-start-up-scripts-for-ovs-on-centos-and-red-hat/


[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-enp0s25

TYPE=OVSPort

BOOTPROTO=none

DEVICE=enp0s25

DEVICETYPE=ovs

ONBOOT=yes

HOTPLUG=no

OVS_BRIDGE=ovsbr0


[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-ovsbr0

DEVICE=ovsbr0

ONBOOT=yes

DEVICETYPE=ovs

TYPE=OVSBridge

BOOTPROTO=static

IPADDR=123.140.248.88

NETMASK=255.255.255.0

GATEWAY=123.140.248.254

HOTPLUG=no












from boto.s3.connection import S3Connection


access_key = "*****5JJPZD6WYG*****"

secret_key = "*****GnYf/bmdJ9NfkveFF+Mb8IPjVIGybC*****"

region = "s3-ap-northeast-2.amazonaws.com"

conn = S3Connection(access_key, secret_key, host=region)

bucket = conn.get_bucket('ghpark-mp3')

for key in bucket.list():

    print key.name

 


regions:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html


You have to point the connection at the Seoul region endpoint explicitly, or the requests fail with a 400 error.


http://boto.cloudhackers.com/en/latest/s3_tut.html


import os

import fnmatch

from boto.s3.connection import S3Connection


conn = S3Connection('*****5JJPZD6WYG*****','*****GnYf/bmdJ9NfkveFF+Mb8IPjVIGybC*****', host="s3-ap-northeast-2.amazonaws.com")

bucket = conn.get_bucket('ghpark-mp3')

for key in bucket.list():

    print key.name

    # download

    #key.get_contents_to_filename(key.name)

    # delete

    #key.delete()


for f in os.listdir("."):

    if fnmatch.fnmatch(f, "*.mp3"):

        f = f.encode("UTF-8")

        k = bucket.new_key(f)

        # upload

        k.set_contents_from_filename(f)
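
The file-matching half of that upload loop is easy to exercise without touching AWS; a stdlib-only sketch of the same `fnmatch` filter:

```python
import fnmatch
import os

def matching_files(dirpath, pattern="*.mp3"):
    """Return sorted file names in dirpath matching pattern (as the upload loop does)."""
    return sorted(f for f in os.listdir(dirpath) if fnmatch.fnmatch(f, pattern))
```

Each returned name would then go through `bucket.new_key(f)` / `set_contents_from_filename(f)` as above.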



Computing an S3 bucket's total size ... this one is clean:

https://gist.github.com/robinkraft/2667939



import boto

s3 = boto.connect_s3(aws_id, aws_secret_key)


# based on http://www.quora.com/Amazon-S3/What-is-the-fastest-way-to-measure-the-total-size-of-an-S3-bucket


def get_bucket_size(bucket_name):

    bucket = s3.lookup(bucket_name)

    total_bytes = 0

    n = 0

    for key in bucket:

        total_bytes += key.size

        n += 1

        if n % 2000 == 0:

            print n

    total_gigs = total_bytes/1024/1024/1024

    print "%s: %i GB, %i objects" % (bucket_name, total_gigs, n)

    return total_gigs, n


bucket_list = []

bucket_sizes = []


for bucket_name in bucket_list:

    size, object_count = get_bucket_size(bucket_name)

    bucket_sizes.append(dict(name=bucket_name, size=size, count=object_count))


print "\nTotals:"

for bucket_size in bucket_sizes:

    print "%s: %iGB, %i objects" % (bucket_size["name"], bucket_size["size"], bucket_size["count"]) 



