Author Topic: squid proxy server traffic monitoring...  (Read 8079 times)

0 Members and 1 Guest are viewing this topic.

ychuang

  • Cute Elementary Schooler
  • *
  • Posts: 24
    • View Profile
squid proxy server traffic monitoring...
« on: 2002-03-09 22:36 »
  Hi, admins and everyone:
     Every time I try to use mrtg to monitor the proxy server's usage, the
  following error message appears; if I drop that target and only monitor the NIC traffic, everything goes back to normal...
=====================================================================
SNMP Error:
no response received
SNMPv1_Session (remote host: "61.70.72.161" [61.70.72.161].3401)
                  community: "public"
                 request ID: 204105268
                PDU bufsize: 8000 bytes
                    timeout: 2s
                    retries: 5
                    backoff: 1)
 at /usr/bin/../lib/mrtg2/SNMP_util.pm line 450
SNMPGET Problem for cacheHttpHits cacheClientHttpRequests on public@61.70.72.161:3401
 at /usr/bin/mrtg line 1491
WARNING: Expected a number but got ''
WARNING: Expected a number but got ''
=====================================================================
The snmp service on the proxy server is configured as follows:
acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic
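A quick way to check whether squid answers on that port at all is to query it directly (a diagnostic sketch; host, port, and community are taken from the config above, and the snmpwalk call uses the old net-snmp v4 syntax seen later in this thread):

```shell
# Walk squid's enterprise MIB subtree on udp/3401. A "no response"
# timeout here usually means squid was built without --enable-snmp,
# or snmp_access is denying the query.
snmpwalk -p 3401 61.70.72.161 public .1.3.6.1.4.1.3495
```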

Could this be a problem with squid's snmp service???

VBird

  • Administrator
  • I'm a PhD!
  • *****
  • Posts: 1516
    • View Profile
    • http://linux.vbird.org
squid proxy server traffic monitoring...
« Reply #1 on: 2002-03-10 00:52 »
As far as I remember, to enable squid's snmp service the squid binary needs to be recompiled! If you are using the squid that ships with Red Hat, then I'm sorry to say snmp does not seem to be among the services enabled by default!

If you installed from the tar.gz, you can just recompile it yourself!
./configure --prefix=/usr/local/squid \
  --enable-icmp --enable-async-io=40 \
  --enable-err-language="Traditional_Chinese" \
  --enable-cache-digests \
  --enable-snmp
Of course, this still depends on your own situation! The above is only an example for reference!
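Once rebuilt, one way to confirm the new binary really has SNMP support is to check the configure options recorded in it (a sketch; the install path follows the --prefix in the example above):

```shell
# squid -v prints the version plus the ./configure options the binary
# was built with; --enable-snmp should show up in that list.
/usr/local/squid/bin/squid -v | grep 'enable-snmp'
```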

ychuang

  • Cute Elementary Schooler
  • *
  • Posts: 24
    • View Profile
squid proxy server traffic monitoring...
« Reply #2 on: 2002-04-02 23:44 »
Quote

On 2002-03-10 00:52, VBird wrote:
As far as I remember, to enable squid's snmp service the squid binary needs to be recompiled! If you are using the squid that ships with Red Hat, then I'm sorry to say snmp does not seem to be among the services enabled by default!

If you installed from the tar.gz, you can just recompile it yourself!
./configure --prefix=/usr/local/squid \
  --enable-icmp --enable-async-io=40 \
  --enable-err-language="Traditional_Chinese" \
  --enable-cache-digests \
  --enable-snmp
Of course, this still depends on your own situation! The above is only an example for reference!



 I reinstalled, and now a different error message shows up...
SNMP Error:
Received SNMP response with error code
  error status: noSuchName
  index 2 (OID: 1.3.6.1.4.1.3495.1.5.2.1.2)
SNMPv1_Session (remote host: "ych.dyndns.org" [61.70.72.161].3401)
                  community: "public"
                 request ID: 1370233135
                PDU bufsize: 8000 bytes
                    timeout: 2s
                    retries: 5
                    backoff: 1)
 at /usr/bin/../lib/mrtg2/SNMP_util.pm line 450
SNMPGET Problem for cacheHttpHits cacheClientHttpRequests on public@ych.dyndns.org:3401
 at /usr/bin/mrtg line 1491
WARNING: Expected a number but got ''
WARNING: Expected a number but got ''

ghostlin

  • Melancholy High Schooler
  • ***
  • Posts: 101
    • View Profile
    • http://tinyspace.dns2go.com
squid proxy server traffic monitoring...
« Reply #3 on: 2002-04-03 11:49 »
SNMP Error:
Received SNMP response with error code
error status: noSuchName

Sigh... I've studied the material on mrtg.org and followed it step by step, and consulted the examples in two books... still no luck...

The three lines above are the error message I see most often...

Could someone post the squid section of their mrtg.cfg for reference?

That would at least rule out errors in this part.
Learning Linux the fast way......... study the original docs...... #:-O
Network security..... you only realize it matters once you hit a problem

ychuang

  • Cute Elementary Schooler
  • *
  • Posts: 24
    • View Profile
squid proxy server traffic monitoring...
« Reply #4 on: 2002-10-06 11:54 »
Quote from: "ghostlin"
SNMP Error:
Received SNMP response with error code
error status: noSuchName

Sigh... I've studied the material on mrtg.org and followed it step by step, and consulted the examples in two books... still no luck...

The three lines above are the error message I see most often...

Could someone post the squid section of their mrtg.cfg for reference?

That would at least rule out errors in this part.


Whether I install from tar or rpm, I always get the "Received SNMP response with error code"
error, and I think I roughly know where the problem is...
Querying host 192.168.1.1, snmp port 3401, with community public:
=============================================
# snmpwalk -p 3401 192.168.1.1 public .1.3.6.1.4.1.3495.1.3.2.1
enterprises.3495.1.3.2.1.1.0 = Counter32: 321
enterprises.3495.1.3.2.1.2.0 = Counter32: 2
enterprises.3495.1.3.2.1.3.0 = Counter32: 0
enterprises.3495.1.3.2.1.4.0 = Counter32: 129
enterprises.3495.1.3.2.1.5.0 = Counter32: 1360
enterprises.3495.1.3.2.1.6.0 = Counter32: 0
enterprises.3495.1.3.2.1.7.0 = Counter32: 0
enterprises.3495.1.3.2.1.8.0 = Counter32: 0
enterprises.3495.1.3.2.1.9.0 = Counter32: 0
enterprises.3495.1.3.2.1.10.0 = 317
enterprises.3495.1.3.2.1.11.0 = 0
enterprises.3495.1.3.2.1.12.0 = Counter32: 1336
enterprises.3495.1.3.2.1.13.0 = Counter32: 153
enterprises.3495.1.3.2.1.14.0 = Counter32: 1448
enterprises.3495.1.3.2.1.15.0 = Counter32: 1
=============================================
The "Counter32:" after the = sign makes snmp pick up a string where it expects a number, which is why error status: noSuchName appears. I only noticed this after reading the book "Linux網路實作經典" several times,
yet I'm still at a loss as to how to solve it. I hope netman, VBird, or
other experts can enlighten me. Many thanks...
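For what it's worth, the "Counter32:" text in the listing above is just snmpwalk's label for the value's SNMP data type, printed in front of the number; stripping it shows the bare values (a small sketch over one line of the pasted output, not a fix for the mrtg error itself):

```shell
# Remove the SNMP type label from a line of snmpwalk output,
# leaving only "OID = value".
echo 'enterprises.3495.1.3.2.1.1.0 = Counter32: 321' | sed 's/Counter32: //'
```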

zoob

  • Diligent Graduate Student
  • *****
  • Posts: 776
    • View Profile
    • http://www.myunix.idv.tw
squid proxy server traffic monitoring...
« Reply #5 on: 2002-10-06 14:13 »
Here is my approach:

1. Recompile squid
When re-running configure, add the --enable-snmp option.

2. Modify squid.conf
acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic all

Restart the squid service, then check whether it is listening on udp port 3401.
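The listening check in step 2 can be done like this (a sketch; the netstat flags are the Linux ones: -l listening sockets, -u UDP, -n numeric addresses):

```shell
# After restarting squid, confirm it opened the SNMP UDP port.
# No output means squid is not listening on 3401.
netstat -lun | grep ':3401'
```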

3. Here is my mrtg config for squid:
######################################################################
# Multi Router Traffic Grapher -- squid Configuration File
######################################################################
# This file is for use with mrtg-2.0
#
# Customized for monitoring Squid Cache
# by Chris Miles  
# http://www.psychofx.com/chris/unix/mrtg/
# To use:
#  - change WorkDir and LoadMIBs settings
#  - change all  "192.168.1.1" occurrences to your squid host
#  - change all  "chris" occurrences to your name/address
#  - change the community strings if required (eg: "public")
#  - change the snmp port if required (eg: 3401)
#
# Note:
#
# * Keywords must start at the begin of a line.
#
# * Lines which follow a keyword line and start
#   with a blank are appended to the keyword line
#
# * Empty Lines are ignored
#
# * Lines starting with a # sign are comments.

# ####################
# Global Configuration
# ####################

# Where should the logfiles, and webpages be created?
WorkDir: /var/www/html/mrtg/squid119

# --------------------------
# Optional Global Parameters
# --------------------------

# How many seconds apart should the browser (Netscape) be
# instructed to reload the page? If this is not defined, the
# default is 300 seconds (5 minutes).

# Refresh: 600

# How often do you call mrtg? The default is 5 minutes. If
# you call it less often, you should specify it here. This
# does two things:

# a) the generated HTML page does contain the right
#    information about the calling interval ...

# b) a META header in the generated HTML page will instruct
#    caches about the time to live of this page .....

# In this example we tell mrtg that we will be calling it
# every 10 minutes. If you are calling mrtg every 5
# minutes, you can leave this line commented out.

# Interval: 10

# With this switch mrtg will generate .meta files for CERN
# and Apache servers which contain Expiration tags for the
# html and gif files. The *.meta files will be created in
# the same directory as the other files, so you might have
# to set  "MetaDir ." in your srm.conf file for this to work
#
# NOTE: If you are running Apache-1.2 you can use the mod_expire
# to achieve the same effect ... see the file htaccess-dist

WriteExpires: Yes

# If you want to keep the mrtg icons in some place other than the
# working directory, use the IconDir variable to give its url.

# IconDir: /mrtgicons/
IconDir: /icons/mrtg/

LoadMIBs: /etc/squid/etc/mib.txt

# #################################################
# Configuration for each Target you want to monitor
# #################################################

# The configuration keywords  "Target" must be followed by a
# unique name. This will also be the name used for the
# webpages, logfiles and gifs created for that target.

# Note that the  "Target" sections can be auto-generated with
# the cfgmaker tool. Check readme.html for instructions.
#     ========

##
## Target ----------------------------------------
##

# With the  "Target" keyword you tell mrtg what it should
# monitor. The  "Target" keyword takes arguments in a wide
# range of formats:

# * The most basic format is  "port:community@router"
#   This will generate a traffic graph for port 'port'
#   of the router 'router' and it will use the community
#   'community' for the snmp query.

# Target[ezwf]: 2:public@wellfleet-fddi.ethz.ch

# * Sometimes you are sitting on the wrong side of the
#   link. And you would like to have mrtg report Incoming
#   traffic as outgoing and vice versa. This can be achieved
#   by adding the '-' sign in front of the  "Target"
#   description. It flips the in and outgoing traffic rates.

# Target[ezci]: -1:public@ezci-ether.ethz.ch

# * You can also explicitly define the OID to query by using the
#   following syntax 'OID_1&OID_2:community@router'
#   The following example will retrieve error input and output
#   octets/sec on interface 1.  MRTG needs to graph two values, so
#   you need to specify two OID's such as temperature and humidity
#   or error input and error output.

# Target[ezwf]: 1.3.6.1.2.1.2.2.1.14.1&1.3.6.1.2.1.2.2.1.20.1:public@myrouter

# * mrtg knows a number of symbolical SNMP variable
#   names. See the file mibhelp.txt for a list of known
#   names. One example is the ifInErrors and ifOutErrors
#   names. This means you can specify the above as:

# Target[ezwf]: ifInErrors.1&ifOutErrors.1:public@myrouter

# * if you want to monitor something which does not provide
#   data via snmp you can use some external program to do
#   the data gathering.

#
#   The external command must return 4 lines of output:
#     Line 1 : current state of the 'incoming bytes counter'
#     Line 2 : current state of the 'outgoing bytes counter'
#     Line 3 : string, telling the uptime of the target.
#     Line 4 : string, telling the name of the target.

#   Depending on the type of data your script returns you
#   might want to use the 'gauge' or 'absolute' arguments
#   for the  "Options" keyword.

# Target[ezwf]: `/usr/local/bin/df2mrtg /dev/dsk/c0t2d0s0`

# * You can also use several statements in a mathematical
#   expression.  This could be used to aggregate both B channels
#   in an ISDN connection or multiple T1's that are aggregated
#   into a single channel for greater bandwidth.
#   Note the whitespace around the target definitions.

# Target[ezwf]: 2:public@wellfleetA + 1:public@wellfleetA
#              * 4:public@ciscoF

##
## RouterUptime ---------------------------------------
##
#
# In cases where you calculate the used bandwidth from
# several interfaces you normally don't get the router uptime
# and router name displayed on the web page.
# If these interfaces are on the same router and the uptime and
# name should be displayed nevertheless, you have to specify
# its community and address again with the RouterUptime keyword.

# Target[kacisco]: 1:public@194.64.66.250 + 2:public@194.64.66.250
# RouterUptime[kacisco]: public@194.64.66.250

##
## MaxBytes -------------------------------------------
##

# How many bytes per second can this port carry. Since most
# links are rated in bits per second, you need to divide
# their maximum bandwidth (in bits) by eight in order to get
# bytes per second.  This is very important to make your
# unscaled graphs display realistic information.  
# T1 = 193000, 56K = 7000, Ethernet = 1250000. The  "MaxBytes"
# value will be used by mrtg to decide whether it got a
# valid response from the router. If a number higher than
#  "MaxBytes" is returned, it is ignored. Also read the section
# on AbsMax for further info.

# MaxBytes[ezwf]: 1250000

##
## Title -----------------------------------------------
##

# Title for the HTML page which gets generated for the graph.

# Title[ezwf]: Traffic Analysis for ETZ C 95.1

##
## PageTop ---------------------------------------------
##

# Things to add to the top of the generated HTML page.  Note
# that you can have several lines of text as long as the
# first column is empty.
# Note that the continuation lines will all end up on the same
# line in the html page. If you want linebreaks in the generated
# html use the '\n' sequence.

# PageTop[ezwf]: <H1>Traffic Analysis for ETZ C95.1</H1>
#  Our Campus Backbone runs over an FDDI line\n
#  with a maximum transfer rate of 12.5 Mega Bytes per
#  Second.

##
## PageFoot ---------------------------------------------
##

# Things to add at the very end of the mrtg generated html page

# PageFoot[ezwf]: This page is managed by Blubber

# --------------------------------------------------
# Optional Target Configuration Tags
# --------------------------------------------------

##
## AddHead -----------------------------------------
##

# Use this tag like the PageTop header, but its contents
# will be added between </TITLE> and </HEAD>.

# AddHead[ezwf]:  

##
## AbsMax ------------------------------------------
##

# If you are monitoring a link which can handle more traffic
# than the MaxBytes value, e.g. a line which uses compression
# or some frame relay link, you can use the AbsMax keyword
# to give the absolute maximum value ever to be reached. We
# need to know this in order to sort out unrealistic values
# returned by the routers. If you do not set AbsMax, rateup
# will ignore values higher than MaxBytes.

# AbsMax[ezwf]: 2500000

##
## Unscaled ------------------------------------------
##

# By default each graph is scaled vertically to make the
# actual data visible even when it is much lower than
# MaxBytes.  With the  "Unscaled" variable you can suppress
# this.  Its argument is a string, containing one letter
# for each graph you don't want to be scaled: d=day w=week
# m=month y=year.  In the example I suppress scaling for the
# yearly and the monthly graph.

# Unscaled[ezwf]: ym

##
## WithPeak ------------------------------------------
##

# By default the graphs only contain the average transfer
# rates for incoming and outgoing traffic. The
# following option instructs mrtg to display the peak
# 5 minute transfer rates in the [w]eekly, [m]onthly and
# [y]early graph. In the example we define the monthly
# and the yearly graph to contain peak as well as average
# values.

# WithPeak[ezwf]: ym

##
## Suppress ------------------------------------------
##

# By Default mrtg produces 4 graphs. With this option you
# can suppress the generation of selected graphs.  The format
# is analogous to the above option. In this example we suppress
# the yearly graph as it is quite empty in the beginning.

# Suppress[ezwf]: y

##
## Directory
##

# By default, mrtg puts all the files that it generates for each
# router (the GIFs, the HTML page, the log file, etc.) in WorkDir.
# If the  "Directory" option is specified, the files are instead put
# into a directory under WorkDir.  (For example, given the options in
# this mrtg.cfg-dist file, the  "Directory" option below would cause all
# the ezwf files to be put into /usr/tardis/pub/www/stats/mrtg/ezwf .)
#
# The directory must already exist; mrtg will not create it.

# Directory[ezwf]: ezwf

##
## XSize and YSize ------------------------------------------
##

# By default mrtg's graphs are 100 by 400 pixels wide (plus
# some more for the labels). In the example we get almost
# square graphs ...
# Note: XSize must be between 20 and 600
#       YSize must be larger than 20

# XSize[ezwf]: 300
# YSize[ezwf]: 300

##
## XZoom YZoom -------------------------------------------------
##

# If you want your graphs to have larger pixels, you can
#  "Zoom" them.

#XZoom[ezwf]: 2.0
#YZoom[ezwf]: 2.0

##
## XScale YScale -------------------------------------------------
##

# If you want your graphs to be actually scaled use XScale
# and YScale. (Beware: while this works, the results look ugly,
# to be frank, so if someone wants to fix this: patches are
# welcome.)

# XScale[ezwf]: 1.5
# YScale[ezwf]: 1.5

##
## Step -----------------------------------------------------------
##

# Change the default step from 5 * 60 seconds to
# something else. I have not tested this well ...

# Step[ezwf]: 60

##
## Options ------------------------------------------
##

# The  "Options" Keyword allows you to set some boolean
# switches:
#
# growright - The graph grows to the left by default.
#
# bits -      All the numbers printed are in bits instead
#             of bytes ... looks much more impressive
#
# noinfo -    Suppress the information about uptime and
#             device name in the generated webpage.
#
# absolute -  This is for data sources which reset their
#             value when they are read. This means that
#             rateup does not have to build the difference between
#             this and the last value read from the data
#             source. Useful for external data gatherers.
#
# gauge -     Treat the values gathered from target as absolute
#             and not as counters. This would be useful to
#             monitor things like diskspace, load and so
#             on ....
#
# nopercent   Don't print usage percentages
#
# integer     Print only integers in the summary ...
#

# Options[ezwf]: growright, bits

##
## Colours ------------------------------------------
##

# The  "Colours" tag allows you to override the default colour
# scheme.  Note: All 4 of the required colours must be
# specified here. The colour name ('Colourx' below) is the
# legend name displayed, while the RGB value is the real
# colour used for the display, both on the graph and in the
# html doc.

# Format is: Colour1#RRGGBB,Colour2#RRGGBB,Colour3#RRGGBB,Colour4#RRGGBB
#    where: Colour1 = Input on default graph
#           Colour2 = Output on default graph
#           Colour3 = Max input
#           Colour4 = Max output
#           RRGGBB  = 2 digit hex values for Red, Green and Blue

# Colours[ezwf]: GREEN#00eb0c,BLUE#1000ff,DARK GREEN#006600,VIOLET#ff00ff

##
## Background ------------------------------------------
##

# With the  "Background" tag you can configure the background
# colour of the generated HTML page

# Background[ezwf]: #a0a0a0

##
## YLegend, ShortLegend, Legend[1234] ------------------
##

# The following keywords allow you to override the text
# displayed for the various legends of the graph and in the
# HTML document
#
# * YLegend : The Y-Axis of the graph
# * ShortLegend: The 'b/s' string used for Max, Average and Current
# * Legend[1234IO]: The strings for the colour legend
#
#YLegend[ezwf]: Bits per Second
#ShortLegend[ezwf]: b/s
#Legend1[ezwf]: Incoming Traffic in Bits per Second
#Legend2[ezwf]: Outgoing Traffic in Bits per Second
#Legend3[ezwf]: Maximal 5 Minute Incoming Traffic
#Legend4[ezwf]: Maximal 5 Minute Outgoing Traffic
#LegendI[ezwf]:  In:
#LegendO[ezwf]:  Out:
# Note, if LegendI or LegendO are set to an empty string with
# LegendO[ezwf]:
# The corresponding line below the graph will not be printed at all.

# If you live in an international world, you might want to
# generate the graphs in different timezones. This is set in the
# TZ variable. Under certain operating systems like Solaris,
# this will provoke the localtime call to give the time in
# the selected timezone ...

# Timezone[ezwf]: Japan

# The Timezone is the standard Solaris timezone, ie Japan, Hongkong,
# GMT, GMT+1 etc etc.

# By default, mrtg (actually rateup) uses the strftime(3) '%W' option
# to format week numbers in the monthly graphs.  The exact semantics
# of this format option vary between systems.  If you find that the
# week numbers are wrong, and your system's strftime(3) routine
# supports it, you can try another format option.  The POSIX '%V'
# option seems to correspond to a widely used week numbering
# convention.  The week format character should be specified as a
# single letter; either W, V, or U.

# Weekformat[ezwf]: V

# #############################
# Two very special Target names
# #############################

# To save yourself some typing you can define a target
# called '^'. The text of every Keyword you define for this
# target will be PREPENDED to the corresponding Keyword of
# all the targets defined below this line. The same goes for
# a Target called '$' but its options will be APPENDED.
#
# The example will make mrtg use a common header and a
# common contact person in all the pages generated from
# targets defined later in this file.
#
#PageTop[^]: <H1>NoWhere Unis Traffic Stats</H1>
#PageTop[$]: Contact Peter Norton if you have any questions


PageFoot[^]:  Page managed by  Chris Miles

Target[cacheServerRequests]: cacheServerRequests&cacheServerRequests:public@192.168.1.1:3401
MaxBytes[cacheServerRequests]: 10000000
Title[cacheServerRequests]: Server Requests @ 192.168.1.1
Options[cacheServerRequests]: nopercent
PageTop[cacheServerRequests]: <H1>Server Requests @ 192.168.1.1</H1>
YLegend[cacheServerRequests]: requests/sec
ShortLegend[cacheServerRequests]: req/s
LegendI[cacheServerRequests]: Requests 
LegendO[cacheServerRequests]:
Legend1[cacheServerRequests]: Requests
Legend2[cacheServerRequests]:

Target[cacheServerErrors]: cacheServerErrors&cacheServerErrors:public@192.168.1.1:3401
MaxBytes[cacheServerErrors]: 10000000
Title[cacheServerErrors]: Server Errors @ 192.168.1.1
Options[cacheServerErrors]: nopercent
PageTop[cacheServerErrors]: <H1>Server Errors @ 192.168.1.1</H1>
YLegend[cacheServerErrors]: errors/sec
ShortLegend[cacheServerErrors]: err/s
LegendI[cacheServerErrors]: Errors 
LegendO[cacheServerErrors]:
Legend1[cacheServerErrors]: Errors
Legend2[cacheServerErrors]:

Target[cacheServerInOutKb]: cacheServerInKb&cacheServerOutKb:public@192.168.1.1:3401 * 1024
MaxBytes[cacheServerInOutKb]: 1000000000
Title[cacheServerInOutKb]: Server In/Out Traffic @ 192.168.1.1
Options[cacheServerInOutKb]: nopercent, bits
PageTop[cacheServerInOutKb]: <H1>Server In/Out Traffic @ 192.168.1.1</H1>
YLegend[cacheServerInOutKb]: bits/sec
ShortLegend[cacheServerInOutKb]: bits/s
LegendI[cacheServerInOutKb]: Server In 
LegendO[cacheServerInOutKb]: Server Out 
Legend1[cacheServerInOutKb]: Server In
Legend2[cacheServerInOutKb]: Server Out

#Target[cacheClientHttpRequests]: cacheClientHttpRequests&cacheClientHttpRequests:public@192.168.1.1:3401
#MaxBytes[cacheClientHttpRequests]: 10000000
#Title[cacheClientHttpRequests]: Client Http Requests @ 192.168.1.1
#Options[cacheClientHttpRequests]: nopercent
#PageTop[cacheClientHttpRequests]: <H1>Client Http Requests @ 192.168.1.1</H1>
#YLegend[cacheClientHttpRequests]: requests/sec
#ShortLegend[cacheClientHttpRequests]: req/s
#LegendI[cacheClientHttpRequests]: Requests 
#LegendO[cacheClientHttpRequests]:
#Legend1[cacheClientHttpRequests]: Requests
#Legend2[cacheClientHttpRequests]:

Target[cacheHttpHits]: cacheHttpHits&cacheHttpHits:public@192.168.1.1:3401
MaxBytes[cacheHttpHits]: 10000000
Title[cacheHttpHits]: HTTP Hits @ 192.168.1.1
Options[cacheHttpHits]: nopercent
PageTop[cacheHttpHits]: <H1>HTTP Hits @ 192.168.1.1</H1>
YLegend[cacheHttpHits]: hits/sec
ShortLegend[cacheHttpHits]: hits/s
LegendI[cacheHttpHits]: Hits 
LegendO[cacheHttpHits]:
Legend1[cacheHttpHits]: Hits
Legend2[cacheHttpHits]:

Target[cacheHttpErrors]: cacheHttpErrors&cacheHttpErrors:public@192.168.1.1:3401
MaxBytes[cacheHttpErrors]: 10000000
Title[cacheHttpErrors]: HTTP Errors @ 192.168.1.1
Options[cacheHttpErrors]: nopercent
PageTop[cacheHttpErrors]: <H1>HTTP Errors @ 192.168.1.1</H1>
YLegend[cacheHttpErrors]: errors/sec
ShortLegend[cacheHttpErrors]: err/s
LegendI[cacheHttpErrors]: Errors 
LegendO[cacheHttpErrors]:
Legend1[cacheHttpErrors]: Errors
Legend2[cacheHttpErrors]:

Target[cacheIcpPktsSentRecv]: cacheIcpPktsSent&cacheIcpPktsRecv:public@192.168.1.1:3401
MaxBytes[cacheIcpPktsSentRecv]: 10000000
Title[cacheIcpPktsSentRecv]: ICP Packets Sent/Received
Options[cacheIcpPktsSentRecv]: nopercent
PageTop[cacheIcpPktsSentRecv]: <H1>ICP Packets Sent/Received @ 192.168.1.1</H1>
YLegend[cacheIcpPktsSentRecv]: packets/sec
ShortLegend[cacheIcpPktsSentRecv]: pkts/s
LegendI[cacheIcpPktsSentRecv]: Pkts Sent 
LegendO[cacheIcpPktsSentRecv]: Pkts Received 
Legend1[cacheIcpPktsSentRecv]: Pkts Sent
Legend2[cacheIcpPktsSentRecv]: Pkts Received

Target[cacheIcpKbSentRecv]: cacheIcpKbSent&cacheIcpKbRecv:public@192.168.1.1:3401 * 1024
MaxBytes[cacheIcpKbSentRecv]: 1000000000
Title[cacheIcpKbSentRecv]: ICP bits Sent/Received
Options[cacheIcpKbSentRecv]: nopercent, bits
PageTop[cacheIcpKbSentRecv]: <H1>ICP bits Sent/Received @ 192.168.1.1</H1>
YLegend[cacheIcpKbSentRecv]: bits/sec
ShortLegend[cacheIcpKbSentRecv]: bits/s
LegendI[cacheIcpKbSentRecv]: Sent 
LegendO[cacheIcpKbSentRecv]: Received 
Legend1[cacheIcpKbSentRecv]: Sent
Legend2[cacheIcpKbSentRecv]: Received

Target[cacheHttpInOutKb]: cacheHttpInKb&cacheHttpOutKb:public@192.168.1.1:3401 * 1024
MaxBytes[cacheHttpInOutKb]: 1000000000
Title[cacheHttpInOutKb]: HTTP In/Out Traffic @ 192.168.1.1
Options[cacheHttpInOutKb]: nopercent, bits
PageTop[cacheHttpInOutKb]: <H1>HTTP In/Out Traffic @ 192.168.1.1</H1>
#YLegend[cacheHttpInOutKb]: Bytes/second
#ShortLegend[cacheHttpInOutKb]: Bytes/s
YLegend[cacheHttpInOutKb]: bits/second
ShortLegend[cacheHttpInOutKb]: bits/s
LegendI[cacheHttpInOutKb]: HTTP In 
LegendO[cacheHttpInOutKb]: HTTP Out 
Legend1[cacheHttpInOutKb]: HTTP In
Legend2[cacheHttpInOutKb]: HTTP Out

Target[cacheCurrentSwapSize]: cacheCurrentSwapSize&cacheCurrentSwapSize:public@192.168.1.1:3401
MaxBytes[cacheCurrentSwapSize]: 1000000000
Title[cacheCurrentSwapSize]: Current Swap Size @ 192.168.1.1
Options[cacheCurrentSwapSize]: gauge, nopercent
PageTop[cacheCurrentSwapSize]: <H1>Current Swap Size @ 192.168.1.1</H1>
YLegend[cacheCurrentSwapSize]: swap size
ShortLegend[cacheCurrentSwapSize]: Bytes
LegendI[cacheCurrentSwapSize]: Swap Size 
LegendO[cacheCurrentSwapSize]:
Legend1[cacheCurrentSwapSize]: Swap Size
Legend2[cacheCurrentSwapSize]:

Target[cacheNumObjCount]: cacheNumObjCount&cacheNumObjCount:public@192.168.1.1:3401
MaxBytes[cacheNumObjCount]: 10000000
Title[cacheNumObjCount]: Num Object Count @ 192.168.1.1
Options[cacheNumObjCount]: gauge, nopercent
PageTop[cacheNumObjCount]: <H1>Num Object Count @ 192.168.1.1</H1>
YLegend[cacheNumObjCount]: # of objects
ShortLegend[cacheNumObjCount]: objects
LegendI[cacheNumObjCount]: Num Objects 
LegendO[cacheNumObjCount]:
Legend1[cacheNumObjCount]: Num Objects
Legend2[cacheNumObjCount]:

Target[cacheCpuUsage]: cacheCpuUsage&cacheCpuUsage:public@192.168.1.1:3401
MaxBytes[cacheCpuUsage]: 100
AbsMax[cacheCpuUsage]: 100
Title[cacheCpuUsage]: CPU Usage @ 192.168.1.1
Options[cacheCpuUsage]: absolute, gauge, noinfo, nopercent
Unscaled[cacheCpuUsage]: dwmy
PageTop[cacheCpuUsage]: <H1>CPU Usage @ 192.168.1.1</H1>
YLegend[cacheCpuUsage]: usage %
ShortLegend[cacheCpuUsage]:%
LegendI[cacheCpuUsage]: CPU Usage 
LegendO[cacheCpuUsage]:
Legend1[cacheCpuUsage]: CPU Usage
Legend2[cacheCpuUsage]:

Target[cacheMemUsage]: cacheMemUsage&cacheMemUsage:public@192.168.1.1:3401 * 1024
MaxBytes[cacheMemUsage]: 2000000000
Title[cacheMemUsage]: Memory Usage
Options[cacheMemUsage]: gauge, nopercent
PageTop[cacheMemUsage]: <H1>Total memory accounted for @ 192.168.1.1</H1>
YLegend[cacheMemUsage]: Bytes
ShortLegend[cacheMemUsage]: Bytes
LegendI[cacheMemUsage]: Mem Usage 
LegendO[cacheMemUsage]:
Legend1[cacheMemUsage]: Mem Usage
Legend2[cacheMemUsage]:

Target[cacheSysPageFaults]: cacheSysPageFaults&cacheSysPageFaults:public@192.168.1.1:3401
MaxBytes[cacheSysPageFaults]: 10000000
Title[cacheSysPageFaults]: Sys Page Faults @ 192.168.1.1
Options[cacheSysPageFaults]: nopercent
PageTop[cacheSysPageFaults]: <H1>Sys Page Faults @ 192.168.1.1</H1>
YLegend[cacheSysPageFaults]: page faults/sec
ShortLegend[cacheSysPageFaults]: PF/s
LegendI[cacheSysPageFaults]: Page Faults 
LegendO[cacheSysPageFaults]:
Legend1[cacheSysPageFaults]: Page Faults
Legend2[cacheSysPageFaults]:

Target[cacheSysVMsize]: cacheSysVMsize&cacheSysVMsize:public@192.168.1.1:3401 * 1024
MaxBytes[cacheSysVMsize]: 1000000000
Title[cacheSysVMsize]: Storage Mem Size @ 192.168.1.1
Options[cacheSysVMsize]: gauge, nopercent
PageTop[cacheSysVMsize]: <H1>Storage Mem Size @ 192.168.1.1</H1>
YLegend[cacheSysVMsize]: mem size
ShortLegend[cacheSysVMsize]: Bytes
LegendI[cacheSysVMsize]: Mem Size 
LegendO[cacheSysVMsize]:
Legend1[cacheSysVMsize]: Mem Size
Legend2[cacheSysVMsize]:

Target[cacheSysStorage]: cacheSysStorage&cacheSysStorage:public@192.168.1.1:3401
MaxBytes[cacheSysStorage]: 1000000000
Title[cacheSysStorage]: Storage Swap Size @ 192.168.1.1
Options[cacheSysStorage]: gauge, nopercent
PageTop[cacheSysStorage]: <H1>Storage Swap Size @ 192.168.1.1</H1>
YLegend[cacheSysStorage]: swap size (KB)
ShortLegend[cacheSysStorage]: KBytes
LegendI[cacheSysStorage]: Swap Size 
LegendO[cacheSysStorage]:
Legend1[cacheSysStorage]: Swap Size
Legend2[cacheSysStorage]:

Target[cacheSysNumReads]: cacheSysNumReads&cacheSysNumReads:public@192.168.1.1:3401
MaxBytes[cacheSysNumReads]: 10000000
Title[cacheSysNumReads]: HTTP I/O number of reads @ 192.168.1.1
Options[cacheSysNumReads]: nopercent
PageTop[cacheSysNumReads]: <H1>HTTP I/O number of reads @ 192.168.1.1</H1>
YLegend[cacheSysNumReads]: reads/sec
ShortLegend[cacheSysNumReads]: reads/s
LegendI[cacheSysNumReads]: I/O 
LegendO[cacheSysNumReads]:
Legend1[cacheSysNumReads]: I/O
Legend2[cacheSysNumReads]:

Target[cacheCpuTime]: cacheCpuTime&cacheCpuTime:public@192.168.1.1:3401
MaxBytes[cacheCpuTime]: 1000000000
Title[cacheCpuTime]: Cpu Time
Options[cacheCpuTime]: gauge, nopercent
PageTop[cacheCpuTime]: <H1>Amount of cpu seconds consumed @ 192.168.1.1</H1>
YLegend[cacheCpuTime]: cpu seconds
ShortLegend[cacheCpuTime]: cpu seconds
LegendI[cacheCpuTime]: Mem Time 
LegendO[cacheCpuTime]:
Legend1[cacheCpuTime]: Mem Time
Legend2[cacheCpuTime]:

Target[cacheMaxResSize]: cacheMaxResSize&cacheMaxResSize:public@192.168.1.1:3401 * 1024
MaxBytes[cacheMaxResSize]: 1000000000
Title[cacheMaxResSize]: Max Resident Size
Options[cacheMaxResSize]: gauge, nopercent
PageTop[cacheMaxResSize]: <H1>Maximum Resident Size @ 192.168.1.1</H1>
YLegend[cacheMaxResSize]: Bytes
ShortLegend[cacheMaxResSize]: Bytes
LegendI[cacheMaxResSize]: Size 
LegendO[cacheMaxResSize]:
Legend1[cacheMaxResSize]: Size
Legend2[cacheMaxResSize]:

#Target[cacheCurrentLRUExpiration]: cacheCurrentLRUExpiration&cacheCurrentLRUExpiration:public@192.168.1.1:3401
#MaxBytes[cacheCurrentLRUExpiration]: 1000000000
#Title[cacheCurrentLRUExpiration]: LRU Expiration Age
#Options[cacheCurrentLRUExpiration]: gauge, nopercent
#PageTop[cacheCurrentLRUExpiration]: <H1>Storage LRU Expiration Age @ 192.168.1.1</H1>
#YLegend[cacheCurrentLRUExpiration]: expir (days)
#ShortLegend[cacheCurrentLRUExpiration]: days
#LegendI[cacheCurrentLRUExpiration]: Age 
#LegendO[cacheCurrentLRUExpiration]:
#Legend1[cacheCurrentLRUExpiration]: Age
#Legend2[cacheCurrentLRUExpiration]:

Target[cacheCurrentUnlinkRequests]: cacheCurrentUnlinkRequests&cacheCurrentUnlinkRequests:public@192.168.1.1:3401
MaxBytes[cacheCurrentUnlinkRequests]: 1000000000
Title[cacheCurrentUnlinkRequests]: Unlinkd Requests
Options[cacheCurrentUnlinkRequests]: nopercent
PageTop[cacheCurrentUnlinkRequests]: <H1>Requests given to unlinkd @ 192.168.1.1</H1>
YLegend[cacheCurrentUnlinkRequests]: requests/sec
ShortLegend[cacheCurrentUnlinkRequests]: reqs/s
LegendI[cacheCurrentUnlinkRequests]: Unlinkd requests 
LegendO[cacheCurrentUnlinkRequests]:
Legend1[cacheCurrentUnlinkRequests]: Unlinkd requests
Legend2[cacheCurrentUnlinkRequests]:

#Target[cacheCurrentUnusedFileDescrCount]: cacheCurrentUnusedFileDescrCount&cacheCurrentUnusedFileDescrCount:public@192.168.1.1:3401
#MaxBytes[cacheCurrentUnusedFileDescrCount]: 1000000000
#Title[cacheCurrentUnusedFileDescrCount]: Available File Descriptors
#Options[cacheCurrentUnusedFileDescrCount]: gauge, nopercent
#PageTop[cacheCurrentUnusedFileDescrCount]: <H1>Available number of file descriptors @ 192.168.1.1</H1>
#YLegend[cacheCurrentUnusedFileDescrCount]: # of FDs
#ShortLegend[cacheCurrentUnusedFileDescrCount]: FDs
#LegendI[cacheCurrentUnusedFileDescrCount]: File Descriptors 
#LegendO[cacheCurrentUnusedFileDescrCount]:
#Legend1[cacheCurrentUnusedFileDescrCount]: File Descriptors
#Legend2[cacheCurrentUnusedFileDescrCount]:

#Target[cacheCurrentReservedFileDescrCount]: cacheCurrentReservedFileDescrCount&cacheCurrentReservedFileDescrCount:public@192.168.1.1:3401
#MaxBytes[cacheCurrentReservedFileDescrCount]: 1000000000
#Title[cacheCurrentReservedFileDescrCount]: Reserved File Descriptors
#Options[cacheCurrentReservedFileDescrCount]: gauge, nopercent
#PageTop[cacheCurrentReservedFileDescrCount]: <H1>Reserved number of file descriptors @ 192.168.1.1</H1>
#YLegend[cacheCurrentReservedFileDescrCount]: # of FDs
#ShortLegend[cacheCurrentReservedFileDescrCount]: FDs
#LegendI[cacheCurrentReservedFileDescrCount]: File Descriptors 
#LegendO[cacheCurrentReservedFileDescrCount]:
#Legend1[cacheCurrentReservedFileDescrCount]: File Descriptors
#Legend2[cacheCurrentReservedFileDescrCount]:

Target[cacheClients]: cacheClients&cacheClients:public@192.168.1.1:3401
#Target[cacheClients]: 1.3.6.1.4.1.3495.1.3.2.1.15.0&1.3.6.1.4.1.3495.1.3.2.1.15.0:public@192.168.1.1:3401
MaxBytes[cacheClients]: 1000000000
Title[cacheClients]: Number of Clients
Options[cacheClients]: nopercent
PageTop[cacheClients]: <H1>Number of clients accessing cache @ 192.168.1.1</H1>
YLegend[cacheClients]: clients/sec
ShortLegend[cacheClients]: clients/s
LegendI[cacheClients]: Num Clients 
LegendO[cacheClients]:
Legend1[cacheClients]: Num Clients
Legend2[cacheClients]:

Target[cacheHttpAllSvcTime]: cacheHttpAllSvcTime.5&cacheHttpAllSvcTime.60:public@192.168.1.1:3401
MaxBytes[cacheHttpAllSvcTime]: 1000000000
Title[cacheHttpAllSvcTime]: HTTP All Service Time
Options[cacheHttpAllSvcTime]: gauge, nopercent
PageTop[cacheHttpAllSvcTime]: <H1>HTTP all service time @ 192.168.1.1</H1>
YLegend[cacheHttpAllSvcTime]: svc time (ms)
ShortLegend[cacheHttpAllSvcTime]: ms
LegendI[cacheHttpAllSvcTime]: Median Svc Time (5min) 
LegendO[cacheHttpAllSvcTime]: Median Svc Time (60min) 
Legend1[cacheHttpAllSvcTime]: Median Svc Time
Legend2[cacheHttpAllSvcTime]: Median Svc Time

Target[cacheHttpMissSvcTime]: cacheHttpMissSvcTime.5&cacheHttpMissSvcTime.60:public@192.168.1.1:3401
MaxBytes[cacheHttpMissSvcTime]: 1000000000
Title[cacheHttpMissSvcTime]: HTTP Miss Service Time
Options[cacheHttpMissSvcTime]: gauge, nopercent
PageTop[cacheHttpMissSvcTime]: <H1>HTTP miss service time @ 192.168.1.1</H1>
YLegend[cacheHttpMissSvcTime]: svc time (ms)
ShortLegend[cacheHttpMissSvcTime]: ms
LegendI[cacheHttpMissSvcTime]: Median Svc Time (5min) 
LegendO[cacheHttpMissSvcTime]: Median Svc Time (60min) 
Legend1[cacheHttpMissSvcTime]: Median Svc Time
Legend2[cacheHttpMissSvcTime]: Median Svc Time

Target[cacheHttpNmSvcTime]: cacheHttpNmSvcTime.5&cacheHttpNmSvcTime.60:public@192.168.1.1:3401
MaxBytes[cacheHttpNmSvcTime]: 1000000000
Title[cacheHttpNmSvcTime]: HTTP Near Miss Service Time
Options[cacheHttpNmSvcTime]: gauge, nopercent
PageTop[cacheHttpNmSvcTime]: <H1>HTTP near miss service time @ 192.168.1.1</H1>
YLegend[cacheHttpNmSvcTime]: svc time (ms)
ShortLegend[cacheHttpNmSvcTime]: ms
LegendI[cacheHttpNmSvcTime]: Median Svc Time (5min) 
LegendO[cacheHttpNmSvcTime]: Median Svc Time (60min) 
Legend1[cacheHttpNmSvcTime]: Median Svc Time
Legend2[cacheHttpNmSvcTime]: Median Svc Time

Target[cacheHttpHitSvcTime]: cacheHttpHitSvcTime.5&cacheHttpHitSvcTime.60:public@192.168.1.1:3401
MaxBytes[cacheHttpHitSvcTime]: 1000000000
Title[cacheHttpHitSvcTime]: HTTP Hit Service Time
Options[cacheHttpHitSvcTime]: gauge, nopercent
PageTop[cacheHttpHitSvcTime]: <H1>HTTP hit service time @ 192.168.1.1</H1>
YLegend[cacheHttpHitSvcTime]: svc time (ms)
ShortLegend[cacheHttpHitSvcTime]: ms
LegendI[cacheHttpHitSvcTime]: Median Svc Time (5min) 
LegendO[cacheHttpHitSvcTime]: Median Svc Time (60min) 
Legend1[cacheHttpHitSvcTime]: Median Svc Time
Legend2[cacheHttpHitSvcTime]: Median Svc Time

Target[cacheIcpQuerySvcTime]: cacheIcpQuerySvcTime.5&cacheIcpQuerySvcTime.60:public@192.168.1.1:3401
MaxBytes[cacheIcpQuerySvcTime]: 1000000000
Title[cacheIcpQuerySvcTime]: ICP Query Service Time
Options[cacheIcpQuerySvcTime]: gauge, nopercent
PageTop[cacheIcpQuerySvcTime]: <H1>ICP query service time @ 192.168.1.1</H1>
YLegend[cacheIcpQuerySvcTime]: svc time (ms)
ShortLegend[cacheIcpQuerySvcTime]: ms
LegendI[cacheIcpQuerySvcTime]: Median Svc Time (5min) 
LegendO[cacheIcpQuerySvcTime]: Median Svc Time (60min) 
Legend1[cacheIcpQuerySvcTime]: Median Svc Time
Legend2[cacheIcpQuerySvcTime]: Median Svc Time

Target[cacheIcpReplySvcTime]: cacheIcpReplySvcTime.5&cacheIcpReplySvcTime.60:public@192.168.1.1:3401
MaxBytes[cacheIcpReplySvcTime]: 1000000000
Title[cacheIcpReplySvcTime]: ICP Reply Service Time
Options[cacheIcpReplySvcTime]: gauge, nopercent
PageTop[cacheIcpReplySvcTime]: <H1>ICP reply service time @ 192.168.1.1</H1>
YLegend[cacheIcpReplySvcTime]: svc time (ms)
ShortLegend[cacheIcpReplySvcTime]: ms
LegendI[cacheIcpReplySvcTime]: Median Svc Time (5min) 
LegendO[cacheIcpReplySvcTime]: Median Svc Time (60min) 
Legend1[cacheIcpReplySvcTime]: Median Svc Time
Legend2[cacheIcpReplySvcTime]: Median Svc Time

Target[cacheDnsSvcTime]: cacheDnsSvcTime.5&cacheDnsSvcTime.60:public@192.168.1.1:3401
MaxBytes[cacheDnsSvcTime]: 1000000000
Title[cacheDnsSvcTime]: DNS Service Time
Options[cacheDnsSvcTime]: gauge, nopercent
PageTop[cacheDnsSvcTime]: <H1>DNS service time @ 192.168.1.1</H1>
YLegend[cacheDnsSvcTime]: svc time (ms)
ShortLegend[cacheDnsSvcTime]: ms
LegendI[cacheDnsSvcTime]: Median Svc Time (5min) 
LegendO[cacheDnsSvcTime]: Median Svc Time (60min) 
Legend1[cacheDnsSvcTime]: Median Svc Time
Legend2[cacheDnsSvcTime]: Median Svc Time

Target[cacheRequestHitRatio]: cacheRequestHitRatio.5&cacheRequestHitRatio.60:public@192.168.1.1:3401
MaxBytes[cacheRequestHitRatio]: 100
AbsMax[cacheRequestHitRatio]: 100
Title[cacheRequestHitRatio]: Request Hit Ratio @ 192.168.1.1
Options[cacheRequestHitRatio]: absolute, gauge, noinfo, nopercent
Unscaled[cacheRequestHitRatio]: dwmy
PageTop[cacheRequestHitRatio]: <H1>Request Hit Ratio @ 192.168.1.1</H1>
YLegend[cacheRequestHitRatio]: %
ShortLegend[cacheRequestHitRatio]: %
LegendI[cacheRequestHitRatio]: Median Hit Ratio (5min) 
LegendO[cacheRequestHitRatio]: Median Hit Ratio (60min) 
Legend1[cacheRequestHitRatio]: Median Hit Ratio
Legend2[cacheRequestHitRatio]: Median Hit Ratio

Target[cacheRequestByteRatio]: cacheRequestByteRatio.5&cacheRequestByteRatio.60:public@192.168.1.1:3401
MaxBytes[cacheRequestByteRatio]: 100
AbsMax[cacheRequestByteRatio]: 100
Title[cacheRequestByteRatio]: Byte Hit Ratio @ 192.168.1.1
Options[cacheRequestByteRatio]: absolute, gauge, noinfo, nopercent
Unscaled[cacheRequestByteRatio]: dwmy
PageTop[cacheRequestByteRatio]: <H1>Byte Hit Ratio @ 192.168.1.1</H1>
YLegend[cacheRequestByteRatio]: %
ShortLegend[cacheRequestByteRatio]: %
LegendI[cacheRequestByteRatio]: Median Hit Ratio (5min) 
LegendO[cacheRequestByteRatio]: Median Hit Ratio (60min) 
Legend1[cacheRequestByteRatio]: Median Hit Ratio
Legend2[cacheRequestByteRatio]: Median Hit Ratio

#Target[cacheBlockingGetHostByAddr]: cacheBlockingGetHostByAddr&cacheBlockingGetHostByAddr:public@192.168.1.1:3401
#MaxBytes[cacheBlockingGetHostByAddr]: 1000000000
#Title[cacheBlockingGetHostByAddr]: Blocking gethostbyaddr
#Options[cacheBlockingGetHostByAddr]: nopercent
#PageTop[cacheBlockingGetHostByAddr]: <H1>Blocking gethostbyaddr count @ 192.168.1.1</H1>
#YLegend[cacheBlockingGetHostByAddr]: blocks/sec
#ShortLegend[cacheBlockingGetHostByAddr]: blocks/s
#LegendI[cacheBlockingGetHostByAddr]: Blocking 
#LegendO[cacheBlockingGetHostByAddr]:
#Legend1[cacheBlockingGetHostByAddr]: Blocking
#Legend2[cacheBlockingGetHostByAddr]:

4. Run mrtg

5. Done

Give it a try, and if you run into problems, post them here (along with the error messages).
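Step 4 can be sketched as follows. The config path is an assumption; adjust it to wherever the cfg above was saved.

```shell
# Path to the MRTG config above -- an assumed location, adjust as needed.
CFG=/etc/mrtg/squid.cfg

# On the first couple of runs MRTG complains about missing .log/.old
# files; that is normal and stops by the third run.
mrtg "$CFG"
mrtg "$CFG"
mrtg "$CFG"

# Afterwards, run it every 5 minutes from cron, e.g. in /etc/crontab:
# */5 * * * *  root  /usr/bin/mrtg /etc/mrtg/squid.cfg
```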

ychuang

squid proxy server traffic monitoring...
« Reply #6 on: 2002-10-06 17:20 »
Quote from: "zoob"
Here is what I did:

1. Recompile squid
When re-running configure, add the --enable-snmp option.

2. Modify squid.conf
acl snmppublic snmp_community public
snmp_port 3401
snmp_access allow snmppublic all

Restart the squid service and check whether it is listening on UDP port 3401.

Give it a try, and if you run into problems, post them here (along with the error messages).
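The "is it listening" check in the quoted steps can be done along these lines. The snmpwalk syntax shown is net-snmp 5.x style (older ucd-snmp took the port via -p), and 3495 is squid's registered enterprise OID.

```shell
# Is anything listening on UDP port 3401?
netstat -an | grep 3401

# Query squid's SNMP agent directly; its MIB lives under enterprises.3495.
snmpwalk -v 1 -c public 192.168.1.1:3401 .1.3.6.1.4.1.3495.1
```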


Why is it that when I run /usr/local/squid/bin/RunCache, it prints one line, "Running: squid -sY  >> /usr/local/squid/var/squid.out 2>&1", and then just sits there...
squid itself is running normally, but after I press Ctrl+C the # prompt comes back and squid has stopped.
What is going on?
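This behaviour is expected of squid 2.x's RunCache: it is a shell loop that runs `squid -sY` in the foreground and restarts it if it dies, so Ctrl+C kills both the loop and squid. The usual way to run it is to put it in the background:

```shell
# RunCache stays in the foreground and restarts squid when it exits;
# background it so the shell prompt returns without killing squid.
/usr/local/squid/bin/RunCache &
```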

zoob

squid proxy server traffic monitoring...
« Reply #7 on: 2002-10-06 18:14 »
Try deleting the cache directory and creating a new one, then try again.
If that still fails, post the squid-related messages from /var/log/messages.

ychuang

squid proxy server traffic monitoring...
« Reply #8 on: 2002-10-06 20:57 »
Quote from: "zoob"
Try deleting the cache directory and creating a new one, then try again.
If that still fails, post the squid-related messages from /var/log/messages.


The messages from /var/log/messages...
Oct  6 20:50:40 linux2 squid[1390]: Starting Squid Cache version 2.5.STABLE1 for i686-pc-linux-gnu...
Oct  6 20:50:40 linux2 squid[1390]: Process ID 1390
Oct  6 20:50:40 linux2 squid[1390]: With 1024 file descriptors available
Oct  6 20:50:40 linux2 squid[1390]: Performing DNS Tests...

zoob

squid proxy server traffic monitoring...
« Reply #9 on: 2002-10-06 23:27 »
Did you redo the whole procedure?

ychuang

squid proxy server traffic monitoring...
« Reply #10 on: 2002-10-07 18:11 »
Quote from: "zoob"
Did you redo the whole procedure?


I deleted the subdirectories under the cache directory and recreated them with squid -z. Is that what you meant?
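The rebuild being described can be sketched as follows. The cache paths are assumptions based on the ls output later in this thread, and the binary location follows the --prefix used in the configure line quoted earlier.

```shell
# Stop squid before touching the cache directories.
/usr/local/squid/sbin/squid -k shutdown

# Wipe and recreate one cache directory (repeat for each cache_dir).
rm -rf /usr/local/squid/var/proxy_cache1
mkdir -p /usr/local/squid/var/proxy_cache1
chown squid:squid /usr/local/squid/var/proxy_cache1

# Recreate the swap directory structure defined in squid.conf.
/usr/local/squid/sbin/squid -z
```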

zoob

squid proxy server traffic monitoring...
« Reply #11 on: 2002-10-07 18:19 »
Quote from: "ychuang"
Quote from: "zoob"
Did you redo the whole procedure?


I deleted the subdirectories under the cache directory and recreated them with squid -z. Is that what you meant?


Sorry to trouble you, but could you list your installation steps (from the compile through running squid -z)?
Could you also list the permissions on your cache directories?

ychuang

squid proxy server traffic monitoring...
« Reply #12 on: 2002-10-09 18:23 »
Quote from: "zoob"
Quote from: "ychuang"
Quote from: "zoob"
Did you redo the whole procedure?


I deleted the subdirectories under the cache directory and recreated them with squid -z. Is that what you meant?


Sorry to trouble you, but could you list your installation steps (from the compile through running squid -z)?
Could you also list the permissions on your cache directories?


I followed VBird's settings:
./configure --prefix=/usr/local/squid --enable-icmp --enable-async-io=40 \
--enable-err-language="Traditional_Chinese" --enable-cache-digests \
--enable-snmp
make; make install

drwxr--r--   35 squid    squid        4096 10月  8 23:58 proxy_cache1/
drwxr--r--   35 squid    squid        4096 10月  8 23:58 proxy_cache2/
drwxr--r--   35 squid    squid        4096 10月  8 23:58 proxy_cache3/
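Those modes (drwxr--r--) give group and other read but no execute permission, so only the owning squid user can actually traverse the directories. If permissions turn out to be the problem, a conventional reset (assuming cache_effective_user is squid, and run from the directory holding the cache dirs) is:

```shell
# Paths are assumptions taken from the listing above.
chown -R squid:squid proxy_cache1 proxy_cache2 proxy_cache3
chmod -R 750 proxy_cache1 proxy_cache2 proxy_cache3
```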

zoob

squid proxy server traffic monitoring...
« Reply #13 on: 2002-10-09 23:18 »
Are the user and group set in your squid.conf (cache_effective_user / cache_effective_group) both squid?


Also, after running squid -z, does starting squid directly (without going through RunCache) produce any error messages?
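The direct-start test suggested here is most useful with squid 2.x's foreground flags, so any failure is printed to the terminal instead of being hidden:

```shell
/usr/local/squid/sbin/squid -z       # build the swap directories
/usr/local/squid/sbin/squid -N -d 1  # no-daemon mode, debug level 1 to stderr
```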