PS: To repost this article, please credit the source. All rights reserved.
PS: This is based only on my own understanding; if it conflicts with your principles or views, please bear with me and don't flame.
Environment
None
Preface
This is the eighth article in this series, and also its finale. The previous entries are:
- "LLM Fundamentals Catch-up Plan (1) --- Revisiting Some Deep-Learning-Related Math" https://www.cnblogs.com/Iflyinsky/p/18717317
- "LLM Fundamentals Catch-up Plan (2) --- Word Embedding" https://www.cnblogs.com/Iflyinsky/p/18775451
- "LLM Fundamentals Catch-up Plan (3) --- An RNN Example and Test" https://www.cnblogs.com/Iflyinsky/p/18967569
- "LLM Fundamentals Catch-up Plan (4) --- An LSTM Example and Test (an Improvement over RNN)" https://www.cnblogs.com/Iflyinsky/p/19091089
- "LLM Fundamentals Catch-up Plan (5) --- A seq2seq Example and Test (Encoder-Decoder Architecture)" https://www.cnblogs.com/Iflyinsky/p/19150535
- "LLM Fundamentals Catch-up Plan (6) --- A seq2seq-with-Attention Example and Test (Bahdanau Attention)" https://www.cnblogs.com/Iflyinsky/p/19184558
- "LLM Fundamentals Catch-up Plan (7) --- Transformer (Multi-Head Attention, Self-Attention, Positional Encoding) with an Example and Test" https://www.cnblogs.com/Iflyinsky/p/19228410
This article uses a real, current large model as a worked example to connect and review the knowledge points from the earlier posts, so you can see exactly where those ideas show up inside an actual large model.
Since I have recently been studying and using the Qwen3-VL series, I picked Qwen3-VL-2B-Instruct for a standalone analysis and linked it back to the earlier material.
Note: this article does not explain the inference process or the underlying theory of Qwen3-VL-2B-Instruct in detail. If that is what you are after, skip this article and look for dedicated technical write-ups instead.
Qwen3-VL-2B-Instruct Overview
Download and Run
The official Qwen3-VL project lives at https://github.com/QwenLM/Qwen3-VL. Below are the download command and a lightly modified version of the official inference example (the model is downloaded from ModelScope, which is easier to reach from mainland China):

```bash
modelscope download --model Qwen/Qwen3-VL-2B-Instruct --local_dir ./cache
```

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

model_path = "./cache"

# default: Load the model on the available device(s)
model = AutoModelForImageTextToText.from_pretrained(
    model_path, cache_dir=model_path, dtype="auto", device_map="auto"
)

# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
# model = AutoModelForImageTextToText.from_pretrained(
#     "Qwen/Qwen3-VL-235B-A22B-Instruct",
#     dtype=torch.bfloat16,
#     attn_implementation="flash_attention_2",
#     device_map="auto",
# )

processor = AutoProcessor.from_pretrained(model_path, cache_dir=model_path)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
inputs = processor.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt"
)
inputs = inputs.to(model.device)

# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
Model Structure
Building on the example above, we add the following code to print the model structure:

```python
print(model)

vit_model = model.visual
llm_model = model.language_model
```

The printed model structure is as follows:

```text
Qwen3VLForConditionalGeneration(
  (model): Qwen3VLModel(
    (visual): Qwen3VLVisionModel(
      (patch_embed): Qwen3VLVisionPatchEmbed(
        (proj): Conv3d(3, 1024, kernel_size=(2, 16, 16), stride=(2, 16, 16))
      )
      (pos_embed): Embedding(2304, 1024)
      (rotary_pos_emb): Qwen3VLVisionRotaryEmbedding()
      (blocks): ModuleList(
        (0-23): 24 x Qwen3VLVisionBlock(
          (norm1): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
          (norm2): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
          (attn): Qwen3VLVisionAttention(
            (qkv): Linear(in_features=1024, out_features=3072, bias=True)
            (proj): Linear(in_features=1024, out_features=1024, bias=True)
          )
          (mlp): Qwen3VLVisionMLP(
            (linear_fc1): Linear(in_features=1024, out_features=4096, bias=True)
            (linear_fc2): Linear(in_features=4096, out_features=1024, bias=True)
            (act_fn): GELUTanh()
          )
        )
      )
      (merger): Qwen3VLVisionPatchMerger(
        (norm): LayerNorm((1024,), eps=1e-06, elementwise_affine=True)
        (linear_fc1): Linear(in_features=4096, out_features=4096, bias=True)
        (act_fn): GELU(approximate='none')
        (linear_fc2): Linear(in_features=4096, out_features=2048, bias=True)
      )
      (deepstack_merger_list): ModuleList(
        (0-2): 3 x Qwen3VLVisionPatchMerger(
          (norm): LayerNorm((4096,), eps=1e-06, elementwise_affine=True)
          (linear_fc1): Linear(in_features=4096, out_features=4096, bias=True)
          (act_fn): GELU(approximate='none')
          (linear_fc2): Linear(in_features=4096, out_features=2048, bias=True)
        )
      )
    )
    (language_model): Qwen3VLTextModel(
      (embed_tokens): Embedding(151936, 2048)
      (layers): ModuleList(
        (0-27): 28 x Qwen3VLTextDecoderLayer(
          (self_attn): Qwen3VLTextAttention(
            (q_proj): Linear(in_features=2048, out_features=2048, bias=False)
            (k_proj): Linear(in_features=2048, out_features=1024, bias=False)
            (v_proj): Linear(in_features=2048, out_features=1024, bias=False)
            (o_proj): Linear(in_features=2048, out_features=2048, bias=False)
            (q_norm): Qwen3VLTextRMSNorm((128,), eps=1e-06)
            (k_norm): Qwen3VLTextRMSNorm((128,), eps=1e-06)
          )
          (mlp): Qwen3VLTextMLP(
            (gate_proj): Linear(in_features=2048, out_features=6144, bias=False)
            (up_proj): Linear(in_features=2048, out_features=6144, bias=False)
            (down_proj): Linear(in_features=6144, out_features=2048, bias=False)
            (act_fn): SiLUActivation()
          )
          (input_layernorm): Qwen3VLTextRMSNorm((2048,), eps=1e-06)
          (post_attention_layernorm): Qwen3VLTextRMSNorm((2048,), eps=1e-06)
        )
      )
      (norm): Qwen3VLTextRMSNorm((2048,), eps=1e-06)
      (rotary_emb): Qwen3VLTextRotaryEmbedding()
    )
  )
  (lm_head): Linear(in_features=2048, out_features=151936, bias=False)
)
```

From this structure we can see that the model splits into two parts, visual and language_model, which is the common layout of today's vision-language (multimodal) models.
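As a quick sanity check on this two-part split, here is a minimal sketch (assuming the model object loaded in the example above) that counts the parameters of each part; the exact numbers depend on the checkpoint:

```python
# A rough sketch, run after the loading code above: count parameters per sub-module.
def count_params(module):
    return sum(p.numel() for p in module.parameters())

print(f"visual:         {count_params(model.visual) / 1e6:.1f} M params")
print(f"language_model: {count_params(model.language_model) / 1e6:.1f} M params")
print(f"lm_head:        {count_params(model.lm_head) / 1e6:.1f} M params")
```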
A Quick Analysis of the Qwen3-VL-2B-Instruct Model Structure, with a Review of Earlier Topics
Remember the vocabulary concept from the earlier models in this series? Back then we simply mapped every character used in training to an id, and the collection of all ids formed the vocabulary. Today's large models have the same thing; it usually lives in the tokenizer.json file. For this particular model, a few special points are worth noting:
- The <bos>/<eos> tokens used in the earlier articles correspond to this model's <|im_start|>/<|im_end|> special tokens.
- Because it is a vision multimodal model, it also has a few special tokens this article will use, namely <|vision_start|>/<|image_pad|>/<|vision_end|>; they describe how an image is fed into the language model so it can be understood.
- A token does not necessarily correspond to one character; it may cover several characters or only a fraction of one. This is related to how the text is byte-encoded; feel free to dig into it on your own. A small tokenizer sketch follows this list.
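To make the vocabulary and special tokens concrete, here is a minimal sketch (assuming the model files have already been downloaded to ./cache as above, and that <|image_pad|> is indeed the image placeholder token of this checkpoint):

```python
from transformers import AutoTokenizer

# Assumes the model has already been downloaded to ./cache as shown earlier.
tokenizer = AutoTokenizer.from_pretrained("./cache")

text = "Describe this image."
ids = tokenizer(text)["input_ids"]
print(ids)                                    # ids looked up from the vocabulary in tokenizer.json
print(tokenizer.convert_ids_to_tokens(ids))   # tokens do not map 1:1 to words or characters

# Special tokens live in the same vocabulary; <|image_pad|> is assumed to be the image
# placeholder token of this checkpoint, and its id is checkpoint-specific.
print(tokenizer.convert_tokens_to_ids("<|image_pad|>"))
```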
After processor.apply_chat_template above runs, the resulting inputs holds four entries (a quick way to inspect them is sketched right after this list):
- input_ids (the tokenizer output: the input text "Describe this image." and the image placeholder token <|image_pad|>, repeated N times, have already been converted to token ids)
- attention_mask (the mask over input_ids, used to mask out invalid or padded positions of the input sequence)
- pixel_values (the preprocessed image tensor; besides normalization it has also been split into patches; we will not dwell on it here)
- image_grid_thw (not needed for this article; ignore it)
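A minimal way to confirm these four entries, assuming the processor and inputs objects from the example above:

```python
# Inspect what apply_chat_template actually produced.
for name, value in inputs.items():
    print(name, tuple(value.shape), value.dtype)

# Typical shapes (they depend on the prompt and the image):
#   input_ids / attention_mask: (batch, seq_len)
#   pixel_values:               (num_patches, patch_dim)
#   image_grid_thw:             (num_images, 3)
```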
For input_ids, we now know it contains the token ids of the image placeholder; later these positions are replaced with real image features, which is how image and text end up inside the language model together. Audio and other modalities work the same way.
Let's first look at what happens after the model.generate call above. After a chain of calls it reaches the entry of Qwen3VLModel's forward:

```python
def forward(
    self,
    input_ids: torch.LongTensor = None,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_values: Optional[Cache] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    pixel_values: Optional[torch.Tensor] = None,
    pixel_values_videos: Optional[torch.FloatTensor] = None,
    image_grid_thw: Optional[torch.LongTensor] = None,
    video_grid_thw: Optional[torch.LongTensor] = None,
    cache_position: Optional[torch.LongTensor] = None,
    **kwargs: Unpack[TransformersKwargs],
) -> Union[tuple, Qwen3VLModelOutputWithPast]:
    r"""
    image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*):
        The temporal, height and width of feature shape of each image in LLM.
    video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
        The temporal, height and width of feature shape of each video in LLM.
    """
    if (input_ids is None) ^ (inputs_embeds is not None):
        raise ValueError("You must specify exactly one of input_ids or inputs_embeds")
    if inputs_embeds is None:
        inputs_embeds = self.get_input_embeddings()(input_ids)

    image_mask = None
    video_mask = None

    if pixel_values is not None:
        image_embeds, deepstack_image_embeds = self.get_image_features(pixel_values, image_grid_thw)
        image_embeds = torch.cat(image_embeds, dim=0).to(inputs_embeds.device, inputs_embeds.dtype)
        image_mask, _ = self.get_placeholder_mask(
            input_ids, inputs_embeds=inputs_embeds, image_features=image_embeds
        )
        inputs_embeds = inputs_embeds.masked_scatter(image_mask, image_embeds)

    if pixel_values_videos is not None:
        video_embeds, deepstack_video_embeds = self.get_video_features(pixel_values_videos, video_grid_thw)
        video_embeds = torch.cat(video_embeds, dim=0).to(inputs_embeds.device, inputs_embeds.dtype)
        _, video_mask = self.get_placeholder_mask(
            input_ids, inputs_embeds=inputs_embeds, video_features=video_embeds
        )
        inputs_embeds = inputs_embeds.masked_scatter(video_mask, video_embeds)

    visual_pos_masks = None
    deepstack_visual_embeds = None
    if image_mask is not None and video_mask is not None:
        # aggregate visual_pos_masks and deepstack_visual_embeds
        image_mask = image_mask[..., 0]
        video_mask = video_mask[..., 0]
        visual_pos_masks = image_mask | video_mask
        deepstack_visual_embeds = []
        image_mask_joint = image_mask[visual_pos_masks]
        video_mask_joint = video_mask[visual_pos_masks]
        for img_embed, vid_embed in zip(deepstack_image_embeds, deepstack_video_embeds):
            embed_joint = img_embed.new_zeros(visual_pos_masks.sum(), img_embed.shape[-1]).to(img_embed.device)
            embed_joint[image_mask_joint, :] = img_embed
            embed_joint[video_mask_joint, :] = vid_embed
            deepstack_visual_embeds.append(embed_joint)
    elif image_mask is not None:
        image_mask = image_mask[..., 0]
        visual_pos_masks = image_mask
        deepstack_visual_embeds = deepstack_image_embeds
    elif video_mask is not None:
        video_mask = video_mask[..., 0]
        visual_pos_masks = video_mask
        deepstack_visual_embeds = deepstack_video_embeds

    if position_ids is None:
        attention_mask_tensor = (
            attention_mask if not isinstance(attention_mask, dict) else attention_mask["full_attention"]
        )
        if attention_mask_tensor is not None and attention_mask_tensor.ndim == 4:
            attention_mask_tensor = torch.diagonal(attention_mask_tensor[:, 0], dim1=1, dim2=2)
            # Only apply conversion for floating point tensors (inverted masks)
            if attention_mask_tensor.dtype.is_floating_point:
                attention_mask_tensor = attention_mask_tensor / torch.finfo(attention_mask_tensor.dtype).min
                attention_mask_tensor = (1.0 - attention_mask_tensor).int()

        # Calculate RoPE index once per generation in the pre-fill stage only.
        # When compiling, we can't check tensor values thus we check only input length
        # It is safe to assume that `length!=1` means we're in pre-fill because compiled
        # models currently cannot do assisted decoding
        prefill_compiled_stage = is_torchdynamo_compiling() and (
            (input_ids is not None and input_ids.shape[1] != 1)
            or (inputs_embeds is not None and inputs_embeds.shape[1] != 1)
        )
        prefill_noncompiled_stage = not is_torchdynamo_compiling() and (
            (cache_position is not None and cache_position[0] == 0)
            or (past_key_values is None or past_key_values.get_seq_length() == 0)
        )
        if (prefill_compiled_stage or prefill_noncompiled_stage) or self.rope_deltas is None:
            position_ids, rope_deltas = self.get_rope_index(
                input_ids,
                image_grid_thw,
                video_grid_thw,
                attention_mask=attention_mask_tensor,
            )
            self.rope_deltas = rope_deltas
        # then use the prev pre-calculated rope-deltas to get the correct position ids
        else:
            batch_size, seq_length, _ = inputs_embeds.shape
            delta = (
                (cache_position[0] + self.rope_deltas).to(inputs_embeds.device)
                if cache_position is not None
                else 0
            )
            position_ids = torch.arange(seq_length, device=inputs_embeds.device)
            position_ids = position_ids.view(1, -1).expand(batch_size, -1)
            if cache_position is not None:  # otherwise `deltas` is an int `0`
                delta = delta.repeat_interleave(batch_size // delta.shape[0], dim=0)
            position_ids = position_ids.add(delta)
            position_ids = position_ids.unsqueeze(0).expand(3, -1, -1)

    outputs = self.language_model(
        input_ids=None,
        position_ids=position_ids,
        attention_mask=attention_mask,
        past_key_values=past_key_values,
        inputs_embeds=inputs_embeds,
        cache_position=cache_position,
        visual_pos_masks=visual_pos_masks,
        deepstack_visual_embeds=deepstack_visual_embeds,
        **kwargs,
    )

    return Qwen3VLModelOutputWithPast(
        last_hidden_state=outputs.last_hidden_state,
        past_key_values=outputs.past_key_values,
        rope_deltas=self.rope_deltas,
    )
```

Looking at the code above, let's trace what happens to the main entries of inputs:
- input_ids goes through get_input_embeddings to produce the initial inputs_embeds, exactly like the embedding step in the earlier articles. The one thing to note is that the embedding vectors at the image placeholder positions are, at this point, only placeholders; they will be replaced with real image features later.
- pixel_values goes through get_image_features to produce image_embeds; this is the inference of Qwen3VLVisionModel, briefly described below.
- masked_scatter then replaces the placeholder vectors in inputs_embeds with image_embeds (a toy illustration of this operation follows this list).
- From the inputs, the position_ids of the tokens are computed, i.e. the positional information; an earlier article explained why the Transformer needs it.
- Finally position_ids, attention_mask, inputs_embeds, and past_key_values (explained further below) are handed to Qwen3VLTextModel for inference, which yields the logits sequence.
- The logits are then sampled according to the sampling parameters to pick the output token, which is decoded by the tokenizer into the final text. (This part lies outside the code shown above, but it is a necessary piece of the model's post-processing.)
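To make the placeholder-replacement step concrete, here is a toy, self-contained sketch of the same masked_scatter tensor operation with made-up sizes; it is not the model's actual code:

```python
import torch

# Toy sizes: batch=1, seq_len=6, hidden=4; pretend positions 2..4 hold the image placeholder token.
inputs_embeds = torch.zeros(1, 6, 4)
is_image_token = torch.tensor([[False, False, True, True, True, False]])
image_mask = is_image_token.unsqueeze(-1).expand_as(inputs_embeds)

# Pretend these are the image features returned by the vision tower: one vector per placeholder slot.
image_embeds = torch.arange(3 * 4, dtype=torch.float32).reshape(3, 4)

# The same operation the forward() above uses: copy image features into the placeholder positions.
merged = inputs_embeds.masked_scatter(image_mask, image_embeds)
print(merged[0])
```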
As we saw above, the model consists of two parts. Below we briefly walk through the forward pass of each part and see what the knowledge points mentioned earlier look like inside a real multimodal large model.
A Brief Look at the visual Part
Strictly speaking, this series was never meant to cover multimodal large models, but multimodal models are now so widely deployed that it is worth using a vision-language model as the example and seeing how it differs from a plain LLM. The forward of the visual part is as follows:

```python
def forward(self, hidden_states: torch.Tensor, grid_thw: torch.Tensor, **kwargs) -> torch.Tensor:
    """
    Args:
        hidden_states (`torch.Tensor` of shape `(seq_len, hidden_size)`):
            The final hidden states of the model.
        grid_thw (`torch.Tensor` of shape `(num_images_or_videos, 3)`):
            The temporal, height and width of feature shape of each image in LLM.

    Returns:
        `torch.Tensor`: hidden_states.
    """
    hidden_states = self.patch_embed(hidden_states)

    pos_embeds = self.fast_pos_embed_interpolate(grid_thw)
    hidden_states = hidden_states + pos_embeds
    rotary_pos_emb = self.rot_pos_emb(grid_thw)

    seq_len, _ = hidden_states.size()
    hidden_states = hidden_states.reshape(seq_len, -1)
    rotary_pos_emb = rotary_pos_emb.reshape(seq_len, -1)
    emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)
    position_embeddings = (emb.cos(), emb.sin())

    cu_seqlens = torch.repeat_interleave(grid_thw[:, 1] * grid_thw[:, 2], grid_thw[:, 0]).cumsum(
        dim=0,
        # Select dtype based on the following factors:
        #  - FA2 requires that cu_seqlens_q must have dtype int32
        #  - torch.onnx.export requires that cu_seqlens_q must have same dtype as grid_thw
        # See https://github.com/huggingface/transformers/pull/34852 for more information
        dtype=grid_thw.dtype if torch.jit.is_tracing() else torch.int32,
    )
    cu_seqlens = F.pad(cu_seqlens, (1, 0), value=0)

    deepstack_feature_lists = []
    for layer_num, blk in enumerate(self.blocks):
        hidden_states = blk(
            hidden_states,
            cu_seqlens=cu_seqlens,
            position_embeddings=position_embeddings,
            **kwargs,
        )
        if layer_num in self.deepstack_visual_indexes:
            deepstack_feature = self.deepstack_merger_list[self.deepstack_visual_indexes.index(layer_num)](
                hidden_states
            )
            deepstack_feature_lists.append(deepstack_feature)

    hidden_states = self.merger(hidden_states)

    return hidden_states, deepstack_feature_lists
```

Since this article is not meant to explain the model in detail, all we need to know here is: the input is the preprocessed image data plus grid_thw, and the output is hidden_states plus deepstack_feature_lists. The most important output is hidden_states, the embedding matrix of the image tokens, whose role was already described above. A small dimensional sketch of the first patch_embed step follows.
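As a rough dimensional sketch of that first patch_embed step, reusing the hyper-parameters of the Conv3d printed in the model structure above; the input sizes are made up for illustration:

```python
import torch
import torch.nn as nn

# Same hyper-parameters as the printed patch_embed.proj layer; everything else here is made up.
proj = nn.Conv3d(3, 1024, kernel_size=(2, 16, 16), stride=(2, 16, 16))

# A fake input: 2 temporal frames of a 224x224 RGB image (a single image is assumed to be
# duplicated to 2 frames by the preprocessor so it fits the temporal kernel size of 2).
x = torch.randn(1, 3, 2, 224, 224)
patches = proj(x)                             # -> (1, 1024, 1, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)   # -> (1, 196, 1024): 196 patch tokens, 1024-dim each
print(tokens.shape)
```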
Analysis of the language_model Part
The language-model part is the one that most resembles the models we trained earlier in this series. Let's start with its forward pass:

```python
def forward(
    self,
    input_ids: Optional[torch.LongTensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    position_ids: Optional[torch.LongTensor] = None,
    past_key_values: Optional[Cache] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    use_cache: Optional[bool] = None,
    cache_position: Optional[torch.LongTensor] = None,
    # args for deepstack
    visual_pos_masks: Optional[torch.Tensor] = None,
    deepstack_visual_embeds: Optional[list[torch.Tensor]] = None,
    **kwargs: Unpack[FlashAttentionKwargs],
) -> Union[tuple, BaseModelOutputWithPast]:
    r"""
    visual_pos_masks (`torch.Tensor` of shape `(batch_size, seqlen)`, *optional*):
        The mask of the visual positions.
    deepstack_visual_embeds (`list[torch.Tensor]`, *optional*):
        The deepstack visual embeddings. The shape is (num_layers, visual_seqlen, embed_dim).
        The feature is extracted from the different visual encoder layers, and fed to the decoder
        hidden states. It's from the paper DeepStack (https://arxiv.org/abs/2406.04334).
    """
    if (input_ids is None) ^ (inputs_embeds is not None):
        raise ValueError("You must specify exactly one of input_ids or inputs_embeds")

    # torch.jit.trace() doesn't support cache objects in the output
    if use_cache and past_key_values is None and not torch.jit.is_tracing():
        past_key_values = DynamicCache(config=self.config)

    if inputs_embeds is None:
        inputs_embeds = self.embed_tokens(input_ids)

    if cache_position is None:
        past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
        cache_position = torch.arange(
            past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
        )

    # the hard coded `3` is for temporal, height and width.
    if position_ids is None:
        position_ids = cache_position.view(1, 1, -1).expand(3, inputs_embeds.shape[0], -1)
    elif position_ids.ndim == 2:
        position_ids = position_ids[None, ...].expand(3, position_ids.shape[0], -1)

    if position_ids.ndim == 3 and position_ids.shape[0] == 4:
        text_position_ids = position_ids[0]
        position_ids = position_ids[1:]
    else:
        text_position_ids = position_ids[0]

    attention_mask = create_causal_mask(
        config=self.config,
        input_embeds=inputs_embeds,
        attention_mask=attention_mask,
        cache_position=cache_position,
        past_key_values=past_key_values,
        position_ids=text_position_ids,
    )

    hidden_states = inputs_embeds

    # create position embeddings to be shared across the decoder layers
    position_embeddings = self.rotary_emb(hidden_states, position_ids)

    # decoder layers
    for layer_idx, decoder_layer in enumerate(self.layers):
        layer_outputs = decoder_layer(
            hidden_states,
            attention_mask=attention_mask,
            position_ids=text_position_ids,
            past_key_values=past_key_values,
            cache_position=cache_position,
            position_embeddings=position_embeddings,
            **kwargs,
        )
        hidden_states = layer_outputs

        # add visual features to the hidden states of first several layers
        if deepstack_visual_embeds is not None and layer_idx in range(len(deepstack_visual_embeds)):
            hidden_states = self._deepstack_process(
                hidden_states,
                visual_pos_masks,
                deepstack_visual_embeds[layer_idx],
            )

    hidden_states = self.norm(hidden_states)
    return BaseModelOutputWithPast(
        last_hidden_state=hidden_states,
        past_key_values=past_key_values,
    )
```

We can see that after position_ids, attention_mask, inputs_embeds, and past_key_values are passed into the inference process, two important things come out of it: the logits and the past_key_values. Let's look at what each of them is:
- logits: the output of one inference step is a probability (score) matrix over the vocabulary; the sampling parameters (the familiar Temperature / Top P / Frequency Penalty, for example) take effect at this stage to pick a token_id, which is then converted back into text. A simplified sampling sketch follows this list.
- past_key_values: the stored K/V tensors of the attention in every decoder layer; this is exactly where the familiar term "KV Cache" lives.
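As a simplified sketch of what "sampling from the logits" means, here is a toy temperature + top-p (nucleus) sampler; it is not the exact HuggingFace implementation, just the idea:

```python
import torch

def sample_next_token(logits, temperature=0.7, top_p=0.9):
    """Pick one token id from a (vocab_size,) logits vector using temperature + nucleus (top-p) sampling."""
    probs = torch.softmax(logits / temperature, dim=-1)
    sorted_probs, sorted_ids = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    keep = cumulative - sorted_probs < top_p          # smallest set of tokens whose mass reaches top_p
    kept_probs = sorted_probs * keep
    kept_probs = kept_probs / kept_probs.sum()        # renormalize over the kept tokens
    idx = torch.multinomial(kept_probs, num_samples=1)
    return sorted_ids[idx].item()

# Toy usage: a fake vocabulary of 10 tokens instead of the real 151936.
fake_logits = torch.randn(10)
print(sample_next_token(fake_logits))
```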
Finally, what does the common notion of a KV cache (cache hit / cache miss) actually mean? A simple, intuitive example: if we have saved the KV cache for "你好" and then run inference on "你好世界。", we can reuse the cached K/V of "你好" and only compute the new part instead of recomputing the prefix, which speeds up inference and saves compute. The sketch below illustrates the idea.
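Here is a toy sketch of that idea in plain PyTorch (not the model's real cache class): the prefix's K/V are computed once and cached, and the next step only computes the new token's Q/K/V:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d = 8
Wq, Wk, Wv = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)

# Step 1: run the prefix ("你好" -> 2 toy token embeddings) once and cache its K/V.
prefix = torch.randn(2, d)
k_cache, v_cache = prefix @ Wk, prefix @ Wv

# Step 2: a new token arrives. Only its Q/K/V are computed; the prefix K/V are reused
# from the cache instead of being recomputed (this is the "cache hit").
new_tok = torch.randn(1, d)
q_new = new_tok @ Wq
k_cache = torch.cat([k_cache, new_tok @ Wk], dim=0)
v_cache = torch.cat([v_cache, new_tok @ Wv], dim=0)

attn = F.softmax(q_new @ k_cache.T / d ** 0.5, dim=-1)  # (1, 3): the new token attends to prefix + itself
out = attn @ v_cache                                    # (1, d)
print(out.shape)
```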
Afterword
Using Qwen3-VL-2B-Instruct as the example, this article revisited topics from earlier in the series. As we have seen, many of the techniques inside today's large models come straight from things we have already studied.
With this, the series is complete.
References
- https://github.com/QwenLM/Qwen3-VL
If you'd like to tip, subscribe, bookmark, or toss a banana or a coin, follow the WeChat official account (攻城狮的搬砖之路). PS: Please respect original work; if it's not for you, please don't flame.
PS: To repost this article, please credit the source. All rights reserved.
PS: If you have questions, leave a comment and I will reply as soon as I see it.