Django + Elasticsearch

Setting up Elasticsearch with Docker

Download the image

# pull the image
docker pull delron/elasticsearch-ik:2.4.6-1.0

Load the configuration files

Configuration directory: /root/elastic_config/config

elasticsearch.yml

network.host needs to be updated to the host machine's IP address.

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please see the documentation for further information on configuration options:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/setup-configuration.html>
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: node-1
#
# Add custom attributes to the node:
#
# node.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /path/to/data
#
# Path to log files:
#
# path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the `ES_HEAP_SIZE` environment variable is set to about half the memory
# available on the system and that the owner of the process is allowed to use this limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.8.10
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-network.html>
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
# discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of nodes / 2 + 1):
#
# discovery.zen.minimum_master_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html>
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
# gateway.recover_after_nodes: 3
#
# For more information, see the documentation at:
# <http://www.elastic.co/guide/en/elasticsearch/reference/current/modules-gateway.html>
#
# ---------------------------------- Various -----------------------------------
#
# Disable starting multiple nodes on a single system:
#
# node.max_local_storage_nodes: 1
#
# Require explicit names when deleting indices:
#
# action.destructive_requires_name: true

logging.yml

# you can override this using by setting a system property, for example -Des.logger.level=DEBUG
es.logger.level: INFO
rootLogger: ${es.logger.level}, console, file
logger:
  # log action execution errors for easier debugging
  action: DEBUG

  # deprecation logging, turn to DEBUG to see them
  deprecation: INFO, deprecation_log_file

  # reduce the logging for aws, too much is logged under the default INFO
  com.amazonaws: WARN
  # aws will try to do some sketchy JMX stuff, but its not needed.
  com.amazonaws.jmx.SdkMBeanRegistrySupport: ERROR
  com.amazonaws.metrics.AwsSdkMetrics: ERROR

  org.apache.http: INFO

  # gateway
  #gateway: DEBUG
  #index.gateway: DEBUG

  # peer shard recovery
  #indices.recovery: DEBUG

  # discovery
  #discovery: TRACE

  index.search.slowlog: TRACE, index_search_slow_log_file
  index.indexing.slowlog: TRACE, index_indexing_slow_log_file

additivity:
  index.search.slowlog: false
  index.indexing.slowlog: false
  deprecation: false

appender:
  console:
    type: console
    layout:
      type: consolePattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %.10000m%n"

  # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files. 
  # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html
  #file:
    #type: extrasRollingFile
    #file: ${path.logs}/${cluster.name}.log
    #rollingPolicy: timeBased
    #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz
    #layout:
      #type: pattern
      #conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  deprecation_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_deprecation.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  index_search_slow_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_index_search_slowlog.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

  index_indexing_slow_log_file:
    type: dailyRollingFile
    file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log
    datePattern: "'.'yyyy-MM-dd"
    layout:
      type: pattern
      conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n"

Grant permissions

Place the configuration files into the config directory, then give it 777 permissions.

chmod -R 777 /root/elastic_config/config

Run the container, mounting the config directory prepared above into the container's config path and using host networking so Elasticsearch can bind to the IP set in network.host:

docker run -dit --name=elasticsearch --network=host -v /root/elastic_config/config:/usr/share/elasticsearch/config delron/elasticsearch-ik:2.4.6-1.0

Verify

http://192.168.8.10:9200
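
If you prefer to check from Python rather than the browser, here is a minimal sketch using the requests library (assuming requests is installed and the IP matches network.host above):

import requests

# Query the root endpoint; a healthy node answers with JSON that includes
# the cluster name, the version number, and the "You Know, for Search" tagline.
resp = requests.get("http://192.168.8.10:9200/", timeout=5)
resp.raise_for_status()
print(resp.json())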

Configure and use haystack

1 Install with pip

pip install django-haystack
pip install elasticsearch==2.4.1
pip install pysolr

2 Add to the Django settings

INSTALLED_APPS = [
    'haystack',  # full-text search
]


HAYSTACK_CONNECTIONS = {
    "default": {
        "ENGINE": "haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine",  # 指定ES的引擎
        "URL": "http://192.168.8.10:9200/",  # Elasticsearch服务器ip地址,端口号固定为9200
        "INDEX_NAME": "haystack",  # Elasticsearch 建立的索引库的名称
    },
}
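
Optionally, Haystack can keep the index in sync as models are saved or deleted. These extra settings are not part of the original steps, just a common addition:

# Optional: update the Elasticsearch index in real time on model save/delete.
HAYSTACK_SIGNAL_PROCESSOR = 'haystack.signals.RealtimeSignalProcessor'

# Optional: page size used by the built-in Haystack search view.
HAYSTACK_SEARCH_RESULTS_PER_PAGE = 10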

3 URL configuration

# full-text search routes
path('search/', include('haystack.urls')),
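
For context, a sketch of how this line sits in the project's urls.py (assuming the Django 2.x path() routing already used in the snippet above):

from django.urls import include, path

urlpatterns = [
    # ... your other routes ...
    # full-text search view and form provided by django-haystack
    path('search/', include('haystack.urls')),
]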

4 Specify the application to index

In each sub-application that needs an index, create a file; it must be named search_indexes.py.

from haystack import indexes

from apps.goods.models import SKU


class SKUIndex(indexes.SearchIndex, indexes.Indexable):
    # Every SearchIndex needs exactly one field with document=True.
    # It tells Haystack and the search engine which field is the primary one to search.

    # use_template=True builds the document the engine indexes from a data template
    # (rather than error-prone string concatenation), e.g. 'name,caption,id'.

    # By convention this field is named text.
    text = indexes.CharField(document=True, use_template=True)


    def get_model(self):
        # Return the model this index searches.
        return SKU

    def index_queryset(self, using=None):
        # Which objects to index: only SKUs that are on sale.
        return self.get_model().objects.filter(is_launched=True)
        # return self.get_model().objects.all()
        # return SKU.objects.all()

5 Configure the index template

use_template tells Haystack to build the indexed document from a template file.
The directory structure must be exactly: <templates dir>/search/indexes/<directory named after the sub-application>/<model class, lowercased>_text.txt
In <model class>_text.txt, list one template variable per field to index, e.g. {{ object.name }} -> {{ object.<field name> }}, as sketched below.
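
As a concrete illustration (assuming the app directory is goods, matching the import in search_indexes.py, and that SKU has the name and caption fields suggested by the 'name,caption,id' comment), templates/search/indexes/goods/sku_text.txt would simply contain:

{{ object.name }}
{{ object.caption }}
{{ object.id }}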

6 Build the index

python manage.py rebuild_index

7 Verify the search

http://www.meiduo.site:8003/search/?q=huawei
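
Note that Haystack's default search view renders templates/search/search.html, which this walkthrough does not show. A minimal sketch (the result field names are assumptions based on the SKU model above):

<!-- templates/search/search.html: minimal results page for haystack's default view -->
<form method="get" action="/search/">
    <input type="text" name="q" value="{{ query }}">
    <input type="submit" value="Search">
</form>

{% for result in page.object_list %}
    <p>{{ result.object.name }}</p>
{% empty %}
    <p>No results found.</p>
{% endfor %}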

